Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
This paper introduces an estimation method based on Least Squares Support Vector Machines (LS-SVMs) for approximating time-varying as well as constant parameters in deterministic parameter-affine delay differential equations (DDEs). The proposed method reduces the parameter estimation problem to an algebraic optimization problem. Thus, as opposed to conventional approaches, it avoids iterative simulation of the given dynamical system, and a significant speedup can therefore be achieved in the parameter estimation procedure. The solution obtained by the proposed approach can be further utilized for initialization of conventional nonconvex optimization methods for parameter estimation of DDEs. Approximate LS-SVM based models for the state and its derivative are first estimated from the observed data. These estimates are then used for estimation of the unknown parameters of the model. Numerical results are presented and discussed to demonstrate the applicability of the proposed method.
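A minimal sketch of the two-stage idea under simplifying assumptions: kernel ridge regression stands in for the LS-SVM state model, the delay is assumed known, and the toy system dx/dt = a*x(t) + b*x(t - tau) is illustrative rather than taken from the paper.

```python
# Sketch: estimate (a, b) in dx/dt = a*x(t) + b*x(t - tau) from noisy samples.
# Kernel ridge regression stands in for the LS-SVM state model; the delay tau
# is assumed known. This is an illustration, not the paper's exact algorithm.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

tau, a_true, b_true, dt = 1.0, -0.5, -1.0, 0.01
d = int(round(tau / dt))                       # delay expressed in grid steps
t = np.arange(0.0, 10.0, dt)
x = np.empty_like(t)
x[:d] = 1.0                                    # constant history on [0, tau)
for k in range(d, len(t)):                     # explicit Euler simulation
    x[k] = x[k - 1] + dt * (a_true * x[k - 1] + b_true * x[k - d])
x_obs = x + 0.01 * np.random.default_rng(0).normal(size=x.size)

# Stage 1: smooth state model; derivative via central differences of the fit.
fit = KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0).fit(t[:, None], x_obs)
xs = lambda s: fit.predict(np.asarray(s)[:, None])
grid = t[(t >= tau) & (t <= t[-1] - dt)]
h = 1e-4
dx = (xs(grid + h) - xs(grid - h)) / (2 * h)

# Stage 2: the DDE is affine in (a, b), so estimation is linear least squares.
Phi = np.column_stack([xs(grid), xs(grid - tau)])
(a_hat, b_hat), *_ = np.linalg.lstsq(Phi, dx, rcond=None)
print(a_hat, b_hat)                            # should be close to (-0.5, -1.0)
```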

2.
Numerical interpolation methods are essential for the estimation of nonlinear functions, and they have a wide range of applications in economics and accounting. In this regard, the idea of using interpolation methods based on multiplicative calculus for suitable accounting problems suggests itself. The purpose of this study, therefore, is to develop a way to better estimate the learning curve, which is an exponentially decreasing function, based on multiplicative Lagrange interpolation. The results of this study show that the proposed multiplicative learning-curve method provides more accurate estimates of labour costs when compared to the conventional methods. This is because exponential functions are linear in multiplicative calculus. Furthermore, the results reveal that using the proposed method enables cost and managerial accountants to better calculate both the cost of unused capacity and the product cost in a cumulative production represented by a nonlinear function. The results of this study are also expected to help researchers, practitioners, economists, business managers, and cost and managerial accountants understand how to construct a multiplicative-calculus-based learning curve to improve such decisions as pricing, profit planning, capacity management, and budgeting.
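The construction is concise enough to sketch: multiplicative Lagrange interpolation is classical Lagrange interpolation applied to the logarithms of the values, p*(x) = prod_i y_i^(l_i(x)), which reproduces exponential curves exactly. A minimal sketch with made-up learning-curve data:

```python
# Sketch of multiplicative Lagrange interpolation: ordinary Lagrange basis
# applied to log-values, then exponentiated -- exact for exponential curves.
import numpy as np

def lagrange_basis(x_nodes, x):
    """Classical Lagrange basis polynomials l_i(x) evaluated at x."""
    L = np.ones((len(x_nodes), np.size(x)))
    for i, xi in enumerate(x_nodes):
        for j, xj in enumerate(x_nodes):
            if i != j:
                L[i] *= (x - xj) / (xi - xj)
    return L

def multiplicative_interp(x_nodes, y_nodes, x):
    """p*(x) = prod_i y_i ** l_i(x); assumes all y_i > 0."""
    L = lagrange_basis(np.asarray(x_nodes, float), np.asarray(x, float))
    return np.exp(np.log(y_nodes) @ L)

# Illustrative learning-curve data: unit labour hours vs cumulative units.
units = np.array([1.0, 2.0, 4.0])
hours = np.array([100.0, 80.0, 64.0])            # a classic 80% learning curve
print(multiplicative_interp(units, hours, 3.0))  # geometric-style estimate
```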

3.
This paper derives a residual-based interactive stochastic gradient (ISG) parameter estimation algorithm for controlled moving average (CMA) models and studies the performance of the residual-based ISG algorithm under weaker conditions on the statistical properties of the noise. Compared with the residual-based extended stochastic gradient algorithm for identifying CMA models, the proposed ISG algorithm gives more accurate parameter estimates, as a simulation example demonstrates.
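For flavor, here is a minimal sketch of the residual-based recursion style involved, in the spirit of the extended stochastic gradient baseline mentioned above, with running residuals standing in for the unmeasurable noise terms; the paper's interactive (ISG) variant and its weaker noise conditions are not reproduced, and the toy CMA model is illustrative.

```python
# Sketch of a residual-based (extended) stochastic gradient recursion for a
# controlled moving average model y(t) = b1*u(t-1) + b2*u(t-2) + v(t) + d1*v(t-1).
# Illustrates the recursion style only, not the paper's interactive variant.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
b1, b2, d1 = 0.6, -0.4, 0.3
u = rng.normal(size=n)
v = 0.2 * rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = b1 * u[t - 1] + b2 * u[t - 2] + v[t] + d1 * v[t - 1]

theta = np.zeros(3)          # estimates of (b1, b2, d1)
vhat = np.zeros(n)           # residuals standing in for the true noise
r = 1.0                      # convergence factor (sum of squared regressors)
for t in range(2, n):
    phi = np.array([u[t - 1], u[t - 2], vhat[t - 1]])
    r += phi @ phi
    theta += phi / r * (y[t] - phi @ theta)   # stochastic gradient step
    vhat[t] = y[t] - phi @ theta              # update residual
print(theta)   # drifts toward (0.6, -0.4, 0.3); SG gains converge slowly
```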

4.
The Markov-switching GARCH model allows for a GARCH structure with time-varying parameters. This flexibility is unfortunately undermined by a path dependence problem which complicates the parameter estimation process. This problem led to the development of computationally intensive estimation methods and to simpler techniques based on an approximation of the model, known as collapsing procedures. This article develops an original algorithm to conduct maximum likelihood inference in the Markov-switching GARCH model, generalizing and improving previously proposed collapsing approaches. A new relationship between particle filtering and collapsing procedures is established which reveals that this algorithm corresponds to a deterministic particle filter. Simulation and empirical studies show that the proposed method allows for a fast and accurate estimation of the model.
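A minimal sketch of what a collapsing step looks like for a two-regime Gaussian MS-GARCH(1,1), in the spirit of Gray's approximation rather than the article's generalized algorithm; the return series and parameter values are placeholders.

```python
# Sketch: approximate log-likelihood of a 2-regime Markov-switching GARCH(1,1)
# via a collapsing step (one representative variance carried per step).
# This illustrates the idea the article generalizes, not its exact algorithm.
import numpy as np

def collapsed_loglik(y, P, omega, alpha, beta):
    """y: returns; P[i, j] = Pr(s_t = j | s_{t-1} = i); params per regime."""
    K = len(omega)
    prob = np.full(K, 1.0 / K)              # filtered regime probabilities
    h = np.full(K, np.var(y))               # regime-conditional variances
    ll = 0.0
    for yt in y:
        pred = prob @ P                     # one-step-ahead regime probs
        dens = np.exp(-0.5 * yt**2 / h) / np.sqrt(2 * np.pi * h)
        joint = pred * dens
        ll += np.log(joint.sum())
        prob = joint / joint.sum()          # filtering update
        h_bar = prob @ h                    # collapse: merge variance paths
        h = omega + alpha * yt**2 + beta * h_bar
    return ll

rng = np.random.default_rng(0)
y = rng.normal(scale=0.01, size=500)        # placeholder return series
P = np.array([[0.95, 0.05], [0.10, 0.90]])
print(collapsed_loglik(y, P, omega=np.array([1e-6, 5e-6]),
                       alpha=np.array([0.05, 0.10]),
                       beta=np.array([0.90, 0.85])))
```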

5.
The theory of Gaussian graphical models is a powerful tool for independence analysis between continuous variables. In this framework, various methods have been conceived to infer independence relations from data samples. However, most of them result in stepwise, deterministic descent algorithms that are inadequate for solving this issue. More recent developments have focused on stochastic procedures, yet these rely on strong a priori knowledge and are unable to perform model selection among the set of all possible models. Moreover, convergence of the corresponding algorithms is slow, precluding applications on a large scale. In this paper, we propose a novel Bayesian strategy to deal with structure learning. Relating graphs to their supports, we convert the problem of model selection into that of parameter estimation. Use of non-informative priors and asymptotic results yield a posterior probability for independence graph supports in closed form. Gibbs sampling is then applied to approximate the full joint posterior density. We finally give three examples of structure learning, one from synthetic data and two from real data.

6.
The challenges of understanding the impacts of air pollution require detailed information on the state of air quality. While many modeling approaches attempt to treat this problem, physically-based deterministic methods are often overlooked due to their costly computational requirements and complicated implementation. In this work we extend a non-intrusive Reduced Basis Data Assimilation method (known as PBDW state estimation) to large pollutant dispersion case studies relying on equations involved in chemical transport models for air quality modeling, with the goal of rendering methods based on parameterized partial differential equations (PDEs) feasible in air quality modeling applications requiring quasi-real-time approximation and correction of model error in imperfect models. Reduced basis methods (RBM) aim to compute a cheap and accurate approximation of a physical state using approximation spaces made of a suitable sample of solutions to the model. A key ingredient of these techniques is the decomposition of the computational work into an expensive one-time offline stage and a low-cost parameter-dependent online stage. Traditional RBMs require modifying the assembly routines of the computational code, an intrusive procedure which may be impossible in cases of operational model codes. We propose a less intrusive reduced order method using data assimilation for measured pollution concentrations, adapted for consideration of the scale and specific application to exterior pollutant dispersion as can be found in urban air quality studies. Common statistical techniques of data assimilation in use in these applications require large historical data sets, or time-consuming iterative methods. The method proposed here avoids both disadvantages. In the case studies presented in this work, the method makes it possible to correct for unmodeled physics and treat cases of unknown parameter values, all while significantly reducing online computational time.
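A toy sketch of the offline/online split at the heart of reduced-basis data assimilation: offline, snapshots of the parameterized model are compressed into a low-dimensional basis; online, the basis coefficients are fitted to sensor readings by least squares. The 1D Gaussian "plume" family is a made-up stand-in for a chemical transport model, and this is a generic least-squares assimilation, not the PBDW formulation itself.

```python
# Toy sketch of reduced-basis data assimilation: offline POD of model snapshots,
# online least-squares fit of basis coefficients to pointwise sensor readings.
# The 1D Gaussian 'plume' family is a stand-in for a chemical transport model.
import numpy as np

x = np.linspace(0.0, 10.0, 200)
def plume(center, width, mass):                  # parameterized model solution
    return mass * np.exp(-0.5 * ((x - center) / width) ** 2)

# Offline stage (expensive, run once): snapshots over a parameter sample + POD.
rng = np.random.default_rng(0)
params = zip(rng.uniform(3, 7, 60), rng.uniform(0.8, 1.5, 60),
             rng.uniform(0.5, 2.0, 60))
snapshots = np.stack([plume(c, w, m) for c, w, m in params])
U, _, _ = np.linalg.svd(snapshots.T, full_matrices=False)
V = U[:, :8]                                     # 8-mode reduced basis

# Online stage (cheap, per assimilation): least-squares fit to sensor readings.
sensors = np.arange(10, 200, 16)                 # 12 sensor grid indices
truth = plume(5.3, 1.1, 1.4) + 0.02 * np.sin(x)  # field with unmodeled physics
obs = truth[sensors] + 0.01 * rng.normal(size=sensors.size)

coeff, *_ = np.linalg.lstsq(V[sensors, :], obs, rcond=None)
estimate = V @ coeff                             # assimilated state on full grid
print(np.sqrt(np.mean((estimate - truth) ** 2))) # RMS error of the estimate
```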

7.
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult task, and most prevalent learning methods treat parameter estimation as a regression problem. The drawback of this approach is that by not directly attempting to find the parameter estimates that maximize the likelihood, there is no principled way of performing subsequent model selection using those parameter estimates. In this paper we describe an estimation method that directly aims at learning the parameters of an MTE potential following a maximum likelihood approach. Empirical results demonstrate that the proposed method yields significantly better likelihood results than existing regression-based methods. We also show how model selection, which in the case of univariate MTEs amounts to partitioning the domain and selecting the number of exponential terms, can be performed using the BIC score.

8.
One of the hardest challenges in building a realistic Bayesian Network (BN) model is to construct the node probability tables (NPTs). Even with a fixed predefined model structure and very large amounts of relevant data, machine learning methods do not consistently achieve great accuracy compared to the ground truth when learning the NPT entries (parameters). Hence, it is widely believed that incorporating expert judgments can improve the learning process. We present a multinomial parameter learning method, which can easily incorporate both expert judgments and data during the parameter learning process. This method uses an auxiliary BN model to learn the parameters of a given BN. The auxiliary BN contains continuous variables and the parameter estimation amounts to updating these variables using an iterative discretization technique. The expert judgments are provided in the form of constraints on parameters divided into two categories: linear inequality constraints and approximate equality constraints. The method is evaluated with experiments based on a number of well-known sample BN models (such as Asia, Alarm and Hailfinder) as well as a real-world software defects prediction BN model. Empirically, the new method achieves much greater learning accuracy (compared to both state-of-the-art machine learning techniques and directly competing methods) with much less data. For example, in the software defects BN for a sample size of 20 (which would be considered difficult to collect in practice) when a small number of real expert constraints are provided, our method achieves a level of accuracy in parameter estimation that can only be matched by other methods with much larger sample sizes (320 samples required for the standard machine learning method, and 105 for the directly competing method with constraints).
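The two constraint categories are easy to illustrate: with sparse counts, maximizing the multinomial likelihood subject to an expert inequality (p_high >= p_medium) and an approximate equality (p_low close to 0.5) already regularizes the estimates. A minimal sketch using a generic constrained optimizer, not the paper's auxiliary-BN and iterative-discretization machinery:

```python
# Sketch: multinomial MLE under expert constraints. Generic constrained
# optimization, not the paper's auxiliary-BN algorithm.
import numpy as np
from scipy.optimize import minimize

counts = np.array([1, 3, 8])       # sparse observed counts: high, medium, low

def neg_loglik(p):
    return -np.sum(counts * np.log(np.clip(p, 1e-12, 1.0)))

cons = [
    {"type": "eq",   "fun": lambda p: p.sum() - 1.0},  # probabilities sum to 1
    {"type": "ineq", "fun": lambda p: p[0] - p[1]},    # expert: p_high >= p_medium
    {"type": "ineq", "fun": lambda p: p[2] - 0.4},     # expert: p_low ~ 0.5,
    {"type": "ineq", "fun": lambda p: 0.6 - p[2]},     # i.e. within [0.4, 0.6]
]
res = minimize(neg_loglik, x0=np.full(3, 1 / 3), bounds=[(0.0, 1.0)] * 3,
               constraints=cons, method="SLSQP")
print(res.x)   # estimates pulled toward the expert judgments
```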

9.
In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When utilizing a kernel density representation of the data, the estimation method relies on the specification of a kernel bandwidth. We show that in most cases the method is robust with respect to the choice of bandwidth, but for certain data sets the bandwidth has a strong impact on the result. Based on this observation, we propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, indicating that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.

10.
An assembly/disassembly (A/D) network is a manufacturing system in which machines perform assembly and/or disassembly operations. We consider tree-structured systems of unreliable machines that produce discrete parts. Processing times, times to failure and times to repair in the inhomogeneous system are assumed to be stochastic and machine-dependent. Machines are separated by buffers of limited capacity. We develop Markov process models for discrete time and continuous time systems and derive approximate decomposition equations to determine performance measures such as production rate and average buffer levels in an iterative algorithm. An improved parameter updating procedure leads to a dramatic improvement with respect to convergence reliability. Numerical results demonstrate that the methods are quite accurate.

11.
This paper considers estimation of the parameter matrix in the growth curve model. First, for the growth curve model after the Potthoff-Roy transformation, penalized least squares estimates of the parameter matrix are derived under several penalty functions: the hard thresholding function, LASSO, ENET, improved LASSO, and SCAD. Then, for the untransformed growth curve model, the penalized least squares estimate is defined directly, and a numerical algorithm for computing it is given based on the Nelder-Mead method. Finally, the proposed estimation methods are evaluated through simulation. The results show that the adaptive LASSO performs comparatively well.
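A minimal sketch of the direct penalized least squares idea with simulated data: an adaptive-LASSO-type penalty minimized by Nelder-Mead, with a coefficient vector standing in for the growth curve parameter matrix.

```python
# Sketch: penalized least squares with an adaptive-LASSO penalty, minimized
# numerically by Nelder-Mead (as in the untransformed-model case). Simulated
# data; a vector coefficient stands in for the growth-curve parameter matrix.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0])
y = X @ beta_true + 0.3 * rng.normal(size=n)

beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)   # pilot estimate
w = 1.0 / np.abs(beta_ls)                         # adaptive weights
lam = 5.0

def objective(beta):
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(w * np.abs(beta))

res = minimize(objective, x0=beta_ls, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
print(np.round(res.x, 3))   # near-zero entries where beta_true is zero
```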

12.
In reduced form default models, the instantaneous default intensity is the classical modelling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard, defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore, we analyse specifications obtained via a simple deterministic time change of a homogeneous Lévy process. While the processes in these two classes share the same average behaviour over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on all the single names included in the iTraxx Europe index. The performance is compared with that of the classical Cox–Ingersoll–Ross intensity and a recently proposed class of intensity models based on Ornstein–Uhlenbeck-type processes. It is shown that the time-inhomogeneous Lévy models achieve comparable calibration errors with fewer parameters and with more stable parameter estimates over time. However, the calibration performances of the Sato-process and time-change specifications are practically indistinguishable.
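The time-change specification admits a closed-form survival probability worth spelling out: if the cumulative hazard is Lambda_t = X_rho(t), with X a subordinator with Laplace exponent phi and rho a deterministic time change, then P(tau > t) = E[exp(-Lambda_t)] = exp(-rho(t) * phi(1)). A minimal sketch with a Gamma subordinator, an illustrative choice rather than one of the paper's calibrated specifications:

```python
# Sketch: survival probabilities when the cumulative hazard is a
# deterministically time-changed subordinator, Lambda_t = X_rho(t). Then
#   P(tau > t) = E[exp(-Lambda_t)] = exp(-rho(t) * phi(1)),
# with phi the Laplace exponent of X. Gamma subordinator chosen to illustrate.
import numpy as np

def phi_gamma(u, a, b):
    """Laplace exponent of a Gamma(a, b) subordinator: E[e^{-u X_1}] = e^{-phi(u)}."""
    return a * np.log1p(u / b)

def survival(t, a, b, rho=lambda t: t ** 1.2):   # rho: deterministic time change
    return np.exp(-rho(t) * phi_gamma(1.0, a, b))

t = np.array([1.0, 3.0, 5.0, 10.0])
print(survival(t, a=0.5, b=10.0))   # decreasing term structure of survival probs
```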

13.
In this work, we discuss the use of a local model developed previously [1] that describes the multiphase flow of gaseous species and liquid water within a single coal seam to investigate the gas production from a spatially heterogeneous production field. The field is located within the Surat Basin in Queensland, and is composed of a total of 80 production wells spread over a region covering approximately 36 km². However, not every well is producing gas at any one time, and so in this work we take a subset of 42 wells that are the top-producing wells in terms of total gas volume. We utilise a population of models approach to understand the variability in the underlying physical processes, and as a mechanism for dealing with the spatial heterogeneity that arises due to geological variation across the field. We are able to simultaneously obtain a family of parameter sets for each of these wells, in which each set in the family yields a predicted cumulative total gas production curve that matches the measured cumulative production curve for a given well to within an allowable limit of error. By analysing the results of this population of models approach we can identify the similarities between wells based on the parameter distributions, and understand the sensitivity of key model parameters. We show by example that high correlation between wells based on their parameter values may be an indicator of their similarity. A combinatorial sum of the predicted gas production is compared against the individual gas volumes (given in terms of percentage of the total volume) measured at the compression facility as a way of further calibrating a subpopulation of models.
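The population-of-models step itself is simple to sketch: sample many candidate parameter sets, simulate each, and keep every set whose predicted cumulative curve matches the measured one within the allowable error. The saturating two-parameter production model and the data below are made up for illustration:

```python
# Toy sketch of the population-of-models step: keep every sampled parameter
# set whose predicted cumulative curve matches the measured one within a
# tolerance. The saturating production model and data here are illustrative.
import numpy as np

t = np.linspace(0.0, 10.0, 50)                  # years of production
def cum_gas(qmax, k):                           # toy cumulative production model
    return qmax * (1.0 - np.exp(-k * t))

rng = np.random.default_rng(0)
measured = cum_gas(8.0, 0.4) + 0.1 * rng.normal(size=t.size)

population = []
for _ in range(20000):
    qmax, k = rng.uniform(1, 20), rng.uniform(0.05, 2.0)
    pred = cum_gas(qmax, k)
    if np.max(np.abs(pred - measured)) < 0.4:   # allowable limit of error
        population.append((qmax, k))

population = np.array(population)
print(len(population), population.mean(axis=0), population.std(axis=0))
```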

14.
A random model approach for the LASSO
The least absolute shrinkage and selection operator (LASSO) is a method of estimation for linear models similar to ridge regression. It shrinks the effect estimates, potentially shrinking some to be identically zero. The amount of shrinkage is governed by a single parameter. Using a random model formulation of the LASSO, this parameter can be specified as the ratio of dispersion parameters. These parameters are estimated using an approximation to the marginal likelihood of the observed data. The observed score equations from the approximation are biased and hence are adjusted by subtracting an empirical estimate of the expected value. After estimation, the model effects can be tested (via simulation), as the distribution of the observed data given that all model effects are zero is known. Two related simulation studies are presented which show that dispersion parameter estimation results in effect estimates that are competitive with other estimation methods (including other LASSO methods).

15.
Analysis of uncertainty is often neglected in the evaluation of complex systems models, such as computational models used in hydrology or ecology. Prediction uncertainty arises from a variety of sources, such as input error, calibration accuracy, parameter sensitivity and parameter uncertainty. In this study, various computational approaches were investigated for analysing the impact of parameter uncertainty on predictions of streamflow for a water-balance hydrological model used in eastern Australia. The parameters and associated equations which had greatest impact on model output were determined by combining differential error analysis and Monte Carlo simulation with stochastic and deterministic sensitivity analysis. This integrated approach aids in the identification of insignificant or redundant parameters and provides support for further simplifications in the mathematical structure underlying the model. Parameter uncertainty was represented by a probability distribution and simulation experiments revealed that the shape (skewness) of the distribution had a significant effect on model output uncertainty. More specifically, increasing negative skewness of the parameter distribution correlated with decreasing width of the model output confidence interval (i.e. resulting in less uncertainty). For skewed distributions, characterisation of uncertainty is more accurate using the confidence interval from the cumulative distribution rather than using variance. The analytic approach also identified the key parameters and the non-linear flux equation most influential in affecting model output uncertainty.
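The skewness effect is straightforward to reproduce: draw a parameter from distributions with equal mean and variance but different skewness, propagate it through a nonlinear model, and compare the output interval widths. A minimal sketch with a toy exponential flux in place of the water-balance model:

```python
# Sketch: effect of parameter-distribution skewness on output uncertainty.
# A toy nonlinear flux Q = exp(0.5 * k) stands in for the hydrological model;
# we compare 95% interval widths under skew-normal parameter distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for a in (-8.0, 0.0, 8.0):                       # skewness parameter
    dist = stats.skewnorm(a, loc=0.0, scale=1.0)
    k = dist.rvs(size=100_000, random_state=rng)
    k = (k - k.mean()) / k.std()                 # equal mean/variance per case
    q = np.exp(0.5 * k)                          # nonlinear model output
    lo, hi = np.percentile(q, [2.5, 97.5])
    print(f"skew a={a:+.0f}: 95% CI width = {hi - lo:.3f}")
```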

16.
Applied Mathematical Modelling, 2014, 38(11-12): 2800-2818
Electrical discharge machining (EDM) is inherently a stochastic process. Predicting the output of such a process with reasonable accuracy is rather difficult. Modern learning-based methodologies, being capable of reading the underlying unseen effect of control factors on responses, appear to be effective in this regard. In the present work, support vector machine (SVM), one of the supervised learning methods, is applied for developing the model of the EDM process. A Gaussian radial basis function and an ε-insensitive loss function are used as the kernel function and loss function respectively. Separate models of material removal rate (MRR) and average surface roughness parameter (Ra) are developed by minimizing the mean absolute percentage error (MAPE) of training data obtained for different sets of SVM parameter combinations. Particle swarm optimization (PSO) is employed for the purpose of optimizing the SVM parameter combinations. The models thus developed are then tested with disjoint testing data sets. Optimum parameter settings for maximum MRR and minimum Ra are further investigated by applying PSO on the developed models.
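A minimal sketch of the modeling loop: an ε-SVR with RBF kernel whose hyperparameters (C, gamma, epsilon) are tuned by a small particle swarm that minimizes training MAPE; the process data are simulated placeholders, and scikit-learn's SVR stands in for the paper's SVM formulation.

```python
# Sketch: epsilon-SVR (RBF kernel) for an EDM response, with a small particle
# swarm tuning (C, gamma, epsilon) by minimizing MAPE. Simulated placeholder
# data; sklearn's SVR stands in for the paper's SVM formulation.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 3))             # current, pulse-on, pulse-off
y = 5 + 5 * X[:, 0] + 2 * np.sin(3 * X[:, 1]) - X[:, 2] \
    + 0.1 * rng.normal(size=80)                 # response bounded away from 0

def mape(params):
    C, gamma, eps = params
    pred = SVR(C=C, gamma=gamma, epsilon=eps).fit(X, y).predict(X)
    return np.mean(np.abs((y - pred) / y))      # training MAPE, as in the paper

lo = np.array([0.1, 0.01, 0.001])               # search box for (C, gamma, eps)
hi = np.array([100.0, 10.0, 0.5])
pos = rng.uniform(lo, hi, size=(15, 3))         # 15 particles
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mape(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for _ in range(30):                             # standard PSO update
    r1, r2 = rng.uniform(size=(2, 15, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([mape(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(g, pbest_f.min())                         # tuned (C, gamma, eps) and MAPE
```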

17.
肖燕婷, 田铮, 孙瑾. 《数学杂志》 (Journal of Mathematics), 2015, 35(5): 1075-1085
This paper studies a nonlinear semiparametric errors-in-variables (EV) model with validation data, in which the covariates are measured with error. Without assuming any structure for the measurement error, two estimators of the unknown parameters in the nonlinear function are constructed using the least squares method and kernel smoothing techniques, and the asymptotic normality of the parameter estimators is proved. Numerical simulations illustrate the effectiveness of the proposed estimation methods in finite samples.

18.
In this paper we consider complex deterministic problems where there are two models that can be used to predict the performance of a given design. One of the models gives a precise estimate but is complex and time consuming; the other is simple and fast but gives only a very crude estimate. We propose a learning-based ordinal optimization approach to tackle this problem. In this approach, we first run the simple model for all the designs and the complex model for a few designs; then, through regression analysis, we estimate the noise trend, and this noise trend together with the crude estimates from the simple model is used to screen the designs. The proposed approach is applied to solve an integrally bladed rotor (IBR) manufacturing problem where the production sequence and the production parameters need to be determined in order to minimize the overall manufacturing cost while satisfying the manufacturing constraints. The results indicate that, by using a very crude and simple model, we are able to identify good designs with a high degree of confidence.
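A minimal sketch of the screening idea: evaluate the crude model on all designs and the accurate model on a few, regress the accurate outputs on the crude ones to learn the bias/noise trend, and rank all designs by the corrected crude estimates. The two models and all numbers are illustrative.

```python
# Sketch of learning-based ordinal optimization: run a crude model on all
# designs, an accurate model on a few, regress to learn the error trend, and
# screen designs by corrected crude scores. Models here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_cost = rng.uniform(10, 100, size=n)                   # accurate model
crude = 0.6 * true_cost + 5 + rng.normal(scale=8, size=n)  # biased, noisy model

probe = rng.choice(n, size=20, replace=False)              # few expensive runs
A = np.column_stack([crude[probe], np.ones(20)])
slope, intercept = np.linalg.lstsq(A, true_cost[probe], rcond=None)[0]

corrected = slope * crude + intercept                      # learned error trend
selected = np.argsort(corrected)[:25]                      # screened top designs
truly_good = set(np.argsort(true_cost)[:25])
print(len(truly_good & set(selected)), "of 25 truly good designs retained")
```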

19.
The binomial software reliability growth model (SRGM) contains most existing SRGMs proposed in earlier work as special cases, and can describe every software failure-occurrence pattern in continuous time. In this paper, we propose generalized binomial SRGMs in both continuous and discrete time, based on the idea of cumulative Bernoulli trials. It is shown that the proposed models give some new unusual discrete models as well as the well-known continuous SRGMs. Through numerical examples with actual software failure data, two estimation methods for model parameters with grouped data are provided, and the predictive model performance is examined quantitatively.

20.
The quality of the estimation of a latent segment model when only store-level aggregate data are available seems to depend on the computational methods selected, and in particular on the optimization methodology used. Following the stream of work that emphasizes the estimation of a segmentation structure with aggregate data, this work proposes a deterministic optimization method that can provide estimates of segment characteristics (size, brand/product preferences, and sensitivity to price and price-promotion variation) that can be accommodated in dynamic models. It is shown that, among the gradient-based optimization methods that were tested, the Sequential Quadratic Programming (SQP) method is the only one that, for all scenarios tested for this type of problem, offers guarantees of reliability, precision and efficiency while being robust, i.e., always able to deliver a solution. Therefore, latent segment models can be estimated using the SQP method when only aggregate market data are available.
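A minimal sketch of such an estimation problem solved with an SQP routine: recovering a segment size and two segment-level price sensitivities from aggregate shares, using scipy's SLSQP. The logit share structure and data are illustrative, not the paper's specification.

```python
# Sketch: fitting a two-segment latent class share model to aggregate data
# with an SQP solver (scipy's SLSQP). Logit share structure and data are
# illustrative, not the paper's exact specification.
import numpy as np
from scipy.optimize import minimize

prices = np.linspace(1.0, 3.0, 15)               # price of brand A across weeks

def share_model(theta):
    w, b1, b2 = theta                            # segment size, 2 sensitivities
    s1 = 1 / (1 + np.exp(-(2.0 + b1 * prices)))  # segment-level brand-A shares
    s2 = 1 / (1 + np.exp(-(2.0 + b2 * prices)))
    return w * s1 + (1 - w) * s2                 # observable aggregate share

rng = np.random.default_rng(0)
observed = share_model([0.3, -0.5, -2.0]) + 0.01 * rng.normal(size=prices.size)

res = minimize(lambda th: np.sum((share_model(th) - observed) ** 2),
               x0=[0.4, -0.3, -1.8], method="SLSQP",
               bounds=[(0.01, 0.99), (-5.0, 0.0), (-5.0, 0.0)])
print(res.x)   # should approximately recover (0.3, -0.5, -2.0), up to relabeling
```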
