Similar Literature
20 similar documents found.
1.
This paper addresses the development of a new algorithm for parameter estimation of ordinary differential equations. Here, we show that (1) the simultaneous approach combined with orthogonal cyclic reduction can be used to reduce the estimation problem to an optimization problem subject to a fixed number of equality constraints, without the need for structural information to devise a stable embedding in the case of non-trivial dichotomy, and (2) the Newton approximation of the Hessian information of the Lagrangian function of the estimation problem should be used in cases where hypothesized models are incorrect or only a limited amount of sample data is available. A new algorithm is proposed which includes the use of the sequential quadratic programming (SQP) Gauss–Newton approximation but also encompasses the SQP Newton approximation, along with tests of when to use this approximation. This composite approach relaxes the restrictions on the SQP Gauss–Newton approximation that the hypothesized model should be correct and the sample data set large enough. This new algorithm has been tested on two standard problems.
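For intuition, here is a minimal single-shooting sketch of the Gauss–Newton flavour of this estimation problem on a toy exponential-decay model; the model, data, and scipy-based solver are illustrative assumptions, not the paper's simultaneous SQP approach with orthogonal cyclic reduction:

```python
# Minimal single-shooting sketch of ODE parameter estimation (assumed toy model).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k):                      # hypothetical test model: dy/dt = -k*y
    return -k * y

t_obs = np.linspace(0.0, 5.0, 25)
rng = np.random.default_rng(0)
y_obs = np.exp(-0.7 * t_obs) + 0.02 * rng.standard_normal(t_obs.size)

def residuals(theta):
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [1.0],
                    t_eval=t_obs, args=(theta[0],))
    return sol.y[0] - y_obs

# least_squares with method='trf' uses a Gauss-Newton-type Hessian model,
# mirroring the SQP Gauss-Newton approximation discussed in the abstract.
fit = least_squares(residuals, x0=[0.3], method="trf")
print("estimated k:", fit.x[0])
```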

2.
Generalized linear latent variable models (GLLVMs) are a powerful class of models for understanding the relationships among multiple, correlated responses. Estimation, however, presents a major challenge, as the marginal likelihood does not possess a closed form for nonnormal responses. We propose a variational approximation (VA) method for estimating GLLVMs. For the common cases of binary, ordinal, and overdispersed count data, we derive fully closed-form approximations to the marginal log-likelihood function in each case. Compared to other methods such as the expectation-maximization algorithm, estimation using VA is fast and straightforward to implement. Predictions of the latent variables and associated uncertainty estimates are also obtained as part of the estimation process. Simulations show that VA estimation performs similarly to or better than some currently available methods, both at predicting the latent variables and estimating their corresponding coefficients. They also show that VA estimation offers dramatic reductions in computation time, particularly if the number of correlated responses is large relative to the number of observational units. We apply the variational approach to two datasets, estimating GLLVMs to understand patterns of variation in youth gratitude and to construct ordination plots for bird abundance data. R code for performing VA estimation of GLLVMs is available online. Supplementary materials for this article are available online.
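A minimal sketch of the VA idea for a one-factor GLLVM with Poisson responses, where the lower bound is fully closed form; the simulated data, parameterization, and generic optimizer below are assumptions, not the authors' implementation:

```python
# VA sketch: y_ij ~ Poisson(exp(beta_j + lambda_j * u_i)), u_i ~ N(0,1),
# with Gaussian variational posteriors q(u_i) = N(m_i, s_i^2).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, p = 60, 4
u_true = rng.standard_normal(n)
lam_true = np.array([0.8, -0.5, 0.6, 0.3])
Y = rng.poisson(np.exp(np.outer(u_true, lam_true)))      # intercepts zero for simplicity

def neg_elbo(theta):
    beta, lam = theta[:p], theta[p:2 * p]
    m, log_s = theta[2 * p:2 * p + n], theta[2 * p + n:]
    s2 = np.exp(2 * log_s)
    eta = beta + np.outer(m, lam)                        # E_q[linear predictor]
    rate = np.exp(eta + 0.5 * np.outer(s2, lam ** 2))    # E_q[exp(eta)], closed form
    ll = np.sum(Y * eta - rate - gammaln(Y + 1.0))
    kl = 0.5 * np.sum(m ** 2 + s2 - 2 * log_s - 1.0)     # sum of KL(q(u_i) || N(0,1))
    return kl - ll

theta0 = np.concatenate([np.zeros(p), 0.1 * rng.standard_normal(p),
                         np.zeros(n), np.zeros(n)])
res = minimize(neg_elbo, theta0, method="L-BFGS-B")
print("loadings (identified only up to reflection):", np.round(res.x[p:2 * p], 2))
```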

3.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference of the joint latent class modeling approach has proved very challenging due to its computational complexity in seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation–maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared to the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with the quasi-Newton method not only substantially reduces computational complexity and duration, but also retains comparable estimation efficiency.

4.
Batch process industries are characterized by complex precedence relationships among operations, which makes the estimation of an acceptable workload very difficult. Previous research indicated that a regression-based model that uses aggregate job set characteristics may be used to support order acceptance decisions. Applications of such models in real life assume that sufficient historical data on job sets and the corresponding makespans are available. In practice, however, historical data may be very limited and insufficient to produce accurate regression estimates. This paper shows that such a lack of data significantly impacts the performance of regression-based order acceptance procedures. To resolve this problem, we devised a method that uses the bootstrap principle. A simulation study shows that performance improvements are obtained when using the parameters estimated from the bootstrapped data set, demonstrating that this bootstrapping procedure can indeed solve the limited data problem in production control.
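A minimal sketch of the bootstrap principle applied to a small regression sample; the features, sample size, and noise level are made-up stand-ins for aggregate job-set characteristics and makespans:

```python
# Bootstrap resampling of a small regression data set (assumed toy data).
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(12), rng.uniform(10, 50, 12)])  # aggregate job-set feature
beta_true = np.array([5.0, 1.8])
makespan = X @ beta_true + rng.normal(0, 4, 12)              # small historical sample

B, coefs = 2000, []
for _ in range(B):
    idx = rng.integers(0, len(makespan), len(makespan))      # resample job sets with replacement
    b, *_ = np.linalg.lstsq(X[idx], makespan[idx], rcond=None)
    coefs.append(b)
coefs = np.asarray(coefs)

# Bootstrap-aggregated coefficients and their spread, usable for order acceptance.
print("mean coefficients:", coefs.mean(axis=0))
print("95% interval for slope:", np.percentile(coefs[:, 1], [2.5, 97.5]))
```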

5.
In the past decade, significant progress has been made in understanding problem complexity of discrete constraint problems. In contrast, little similar work has been done for constraint problems in the continuous domain. In this paper, we study the complexity of typical methods for non-linear constraint problems and present hybrid solvers with improved performance. To facilitate the empirical study, we propose a new test-case generator for generating non-linear constraint satisfaction problems (CSPs) and constrained optimization problems (COPs). The optimization methods tested include a sequential quadratic programming (SQP) method, a penalty method with a fixed penalty function, a penalty method with a sequence of penalty functions, and an augmented Lagrangian method. For hybrid solvers, we focus on the form that combines two or more optimization methods in sequence. In the experiments, we apply these methods to solve a series of continuous constraint problems with increasing constraint-to-variable ratios. The test problems include artificial benchmark problems from the test-case generator and problems derived from controlling a hyper-redundant modular manipulator. We obtain novel results on complexity phase transition phenomena of the various methods. Specifically, for constraint satisfaction problems, the SQP method is the best on weakly constrained problems, whereas the augmented Lagrangian method is the best on highly constrained ones. Although the static penalty method performs poorly by itself, combining it with the SQP method yields a hybrid solver that is significantly better than any of the individual methods on problems with moderate to large constraint-to-variable ratios. For constrained optimization problems, the hybrid solver obtains much better solutions than SQP, while spending a comparable amount of time. In addition, the hybrid solver is flexible and can achieve good results on time-bounded applications by setting parameters according to the time limits.
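A minimal sketch of the sequential hybrid idea, with a static penalty phase warm-starting an SQP phase via scipy's SLSQP; the toy constrained problem and penalty weight are assumptions:

```python
# Hybrid solver sketch: static penalty phase, then SQP refinement.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective (assumed toy COP)
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

def g(x):                       # inequality constraint, g(x) >= 0 means feasible
    return np.array([1.0 - x[0] ** 2 - 0.2 * x[1] ** 2])

# Phase 1: static quadratic penalty with a fixed weight, solved unconstrained.
mu = 100.0
pen = lambda x: f(x) + mu * np.sum(np.minimum(g(x), 0.0) ** 2)
x_pen = minimize(pen, x0=np.zeros(2), method="BFGS").x

# Phase 2: SQP started from the penalty-phase solution.
res = minimize(f, x_pen, method="SLSQP",
               constraints=[{"type": "ineq", "fun": g}])
print("hybrid solution:", res.x, "feasible:", g(res.x) >= -1e-8)
```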

6.
A random model approach for the LASSO
The least absolute shrinkage and selection operator (LASSO) is a method of estimation for linear models similar to ridge regression. It shrinks the effect estimates, potentially shrinking some to be identically zero. The amount of shrinkage is governed by a single parameter. Using a random model formulation of the LASSO, this parameter can be specified as the ratio of dispersion parameters. These parameters are estimated using an approximation to the marginal likelihood of the observed data. The observed score equations from the approximation are biased and hence are adjusted by subtracting an empirical estimate of the expected value. After estimation, the model effects can be tested (via simulation), as the distribution of the observed data given that all model effects are zero is known. Two related simulation studies are presented that show that dispersion parameter estimation results in effect estimates that are competitive with other estimation methods (including other LASSO methods).
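For reference, a minimal coordinate-descent LASSO sketch; unlike the random-model formulation above, the shrinkage parameter here is simply fixed by hand, and the data are simulated:

```python
# Coordinate-descent LASSO: minimize (1/2n)||y - Xb||^2 + alpha*||b||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]        # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, n * alpha) / col_ss[j]
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 8))
y = X @ np.array([2.0, 0, 0, -1.5, 0, 0, 0.5, 0]) + rng.normal(0, 0.5, 100)
print(np.round(lasso_cd(X, y, alpha=0.1), 3))  # several coefficients shrink to exactly zero
```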

7.
A data analysis method is proposed to derive a latent structure matrix from a sample covariance matrix. The matrix can be used to explore the linear latent effect between two sets of observed variables. Procedures are also proposed with which to estimate a set of dependent variables from a set of explanatory variables by using the latent structure matrix. The proposed method can assist researchers in improving the effectiveness of SEM models by exploring the latent structure between two sets of variables. In addition, a structure residual matrix can be derived as a by-product of the proposed method, with which researchers can conduct experimental procedures for variable combinations and selections to build various models for hypothesis testing. These capabilities can improve the effectiveness of traditional SEM methods in characterizing data properties and testing model hypotheses. Case studies are provided to demonstrate the step-by-step procedure for deriving the latent structure matrix; the latent structure estimates are quite close to the results of PLS regression. A structure coefficient index is suggested to explore the relationships among various combinations of variables and their effects on the variance of the latent structure.

8.
Poyiadjis, Doucet, and Singh showed how particle methods can be used to estimate both the score and the observed information matrix for state–space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This article introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation, to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao–Blackwellization. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state–space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost. Supplementary materials including code are available.
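A minimal bootstrap particle filter for a linear-Gaussian toy model is sketched below; note that it approximates the score by finite differences of the estimated log-likelihood with common random numbers, a simpler stand-in rather than the kernel-density/Rao-Blackwellized estimator of the article:

```python
# Bootstrap particle filter log-likelihood, score via finite differences.
import numpy as np

rng = np.random.default_rng(4)
phi_true, sigma, tau, T, N = 0.8, 0.5, 1.0, 200, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sigma * rng.standard_normal()
y = x + tau * rng.standard_normal(T)

def pf_loglik(phi, seed=0):
    r = np.random.default_rng(seed)             # common random numbers across phi
    parts = r.standard_normal(N)
    ll = 0.0
    for t in range(T):
        parts = phi * parts + sigma * r.standard_normal(N)   # propagate
        logw = -0.5 * ((y[t] - parts) / tau) ** 2            # obs density (up to const)
        w = np.exp(logw - logw.max())
        ll += np.log(w.mean()) + logw.max()                  # constant cancels in differences
        parts = parts[r.choice(N, N, p=w / w.sum())]         # multinomial resampling
    return ll

h = 1e-2
score = (pf_loglik(phi_true + h) - pf_loglik(phi_true - h)) / (2 * h)
print("estimated score at true phi:", score)
```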

9.
In contrast to traditional regression analysis, latent variable modelling (LVM) can explicitly differentiate between measurement errors and other random disturbances in the specification and estimation of econometric models. This paper argues that LVM could be a promising approach for testing economic theories, because applied research in business and economics is based on statistical information that is frequently measured inaccurately. Considering the theory of industry-price determination, where the price variables involved are known to include a large measurement error, a latent variable structural-equations model is constructed and applied to data on 7381 product categories classified into 295 manufacturing industries of the US economy. The estimates, compared and evaluated against a traditional regression model fitted to the same data, show the advantages of the LVM analytical framework, which could bring a long-drawn-out conflict between empirical results and theory to a satisfactory reconciliation.

10.
In this paper, the feasible-type SQP method is improved. A new SQP algorithm is presented to solve nonlinear inequality-constrained optimization problems. Compared with existing SQP methods, at each iteration the search direction is obtained by solving only equality-constrained quadratic programming subproblems and systems of linear equations. Under suitable conditions, global and superlinear convergence can be established.
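The key subproblem is easy to make concrete: an equality-constrained QP reduces to a single linear solve of its KKT system, which is the computational saving the abstract emphasises. A minimal sketch with illustrative data:

```python
# Equality-constrained QP:  min 0.5 d'Hd + g'd  s.t.  A d = b,
# solved through its KKT system in one linear solve.
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian approximation (SPD, assumed)
g = np.array([-8.0, -6.0])               # gradient at the current iterate
A = np.array([[1.0, 1.0]])               # linearised equality constraints
b = np.array([1.0])

m = A.shape[0]
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-g, b])
sol = np.linalg.solve(KKT, rhs)
d, lam = sol[:2], sol[2:]
print("search direction:", d, "multiplier:", lam)
```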

11.
We discuss efficient methods for computing gradients in inverse problems for estimation of distributions for individual parameters in models where only aggregate or population level data is available. The ideas are illustrated with two examples arising in applications.

12.
This paper deals with the issue of estimating a production frontier and measuring efficiency from a panel data set. First, it proposes an alternative method for the estimation of a production frontier on a short panel data set. The method is based on the so-called mean-and-covariance structure analysis, which is closely related to the generalized method of moments. One advantage of the method is that it allows us to investigate the presence of correlations between individual effects and exogenous variables without requiring instruments uncorrelated with the individual effects, as in instrumental variable estimation. Another advantage is that the method is well suited to a panel data set with a small number of periods. Second, the paper considers the question of recovering individual efficiency levels from the estimates obtained from the mean-and-covariance structure analysis. Since individual effects are here viewed as latent variables, they can be estimated as factor scores, i.e., weighted sums of the observed variables. We illustrate the proposed methods with the estimation of a stochastic production frontier on a short panel of French fruit growers.

13.
This paper introduces an estimation method based on Least Squares Support Vector Machines (LS-SVMs) for approximating time-varying as well as constant parameters in deterministic parameter-affine delay differential equations (DDEs). The proposed method reduces the parameter estimation problem to an algebraic optimization problem. Thus, as opposed to conventional approaches, it avoids iterative simulation of the given dynamical system and therefore a significant speedup can be achieved in the parameter estimation procedure. The solution obtained by the proposed approach can be further utilized for initialization of the conventional nonconvex optimization methods for parameter estimation of DDEs. Approximate LS-SVM based models for the state and its derivative are first estimated from the observed data. These estimates are then used for estimation of the unknown parameters of the model. Numerical results are presented and discussed for demonstrating the applicability of the proposed method.
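A minimal sketch of the two-step idea on a plain ODE (no delay), with kernel-ridge smoothing standing in for the LS-SVM machinery; the logistic model and data are assumptions:

```python
# Step 1: smooth the data to estimate state and derivative.
# Step 2: for a parameter-affine model, estimation is algebraic least squares.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 4.0, 80)
x_clean = 2.0 / (1.0 + 9.0 * np.exp(-1.5 * t))        # logistic ODE: x' = 1.5x - 0.75x^2
x_obs = x_clean + 0.01 * rng.standard_normal(t.size)

# Kernel-ridge smoother for the state; differentiating the kernel gives x'.
ell, lam = 0.5, 1e-4
D = t[:, None] - t[None, :]
K = np.exp(-D ** 2 / (2 * ell ** 2))
alpha = np.linalg.solve(K + lam * np.eye(t.size), x_obs)
x_hat = K @ alpha
dx_hat = (-D / ell ** 2 * K) @ alpha                  # derivative of the smoother

# x' = theta1*x + theta2*x^2 is affine in theta: no ODE solves are needed.
keep = slice(5, -5)                                   # drop edge points, poor derivatives
Phi = np.column_stack([x_hat, x_hat ** 2])[keep]
theta, *_ = np.linalg.lstsq(Phi, dx_hat[keep], rcond=None)
print("theta estimate:", theta)                       # close to (1.5, -0.75)
```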

14.
This paper enhances cost efficiency measurement methods to account for different scenarios relating to input price information. These consist of situations where prices are known exactly at each decision making unit (DMU) and situations with incomplete price information. The main contribution of this paper consists of the development of a method for the estimation of upper and lower bounds for the cost efficiency (CE) measure in situations of price uncertainty, where only the maximal and minimal bounds of input prices can be estimated for each DMU. The bounds of the CE measure are obtained from assessments in the light of the most favourable price scenario (optimistic perspective) and the least favourable price scenario (pessimistic perspective). The assessments under price uncertainty are based on extensions to the Data Envelopment Analysis (DEA) model that incorporate weight restrictions of the form of input cone assurance regions. The applicability of the models developed is illustrated in the context of the analysis of bank branch performance. The results obtained in the case study showed that the DEA models can provide robust estimates of cost efficiency even in situations of price uncertainty.
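A minimal sketch of DEA cost efficiency evaluated at the extreme price scenarios; this simple scenario evaluation stands in for the paper's assurance-region formulation, and the three-DMU data set is made up:

```python
# Cost-efficiency DEA for one DMU:  min p'x  s.t.  X'lam <= x, Y'lam >= y0, lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 3.0]])        # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [2.0]])                       # outputs
x0, y0 = X[0], Y[0]
p_lo, p_hi = np.array([1.0, 0.5]), np.array([2.0, 1.5])   # price bounds for DMU 0

def cost_eff(p):
    m, n = X.shape[1], X.shape[0]
    c = np.concatenate([p, np.zeros(n)])                  # minimise p'x over (x, lam)
    A_ub = np.block([[-np.eye(m), X.T],                   # X'lam - x <= 0
                     [np.zeros((Y.shape[1], m)), -Y.T]])  # -Y'lam <= -y0
    b_ub = np.concatenate([np.zeros(m), -y0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun / (p @ x0)                             # minimal cost / observed cost

print("CE bounds:", sorted([cost_eff(p_lo), cost_eff(p_hi)]))
```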

15.
This paper develops credibility predictors of aggregate losses using a longitudinal data framework. For a model of aggregate losses, the interest is in predicting both the claims number process and the claims amount process. In a longitudinal data framework, one encounters data from a cross-section of risk classes with a history of insurance claims available for each risk class. Further, explanatory variables for each risk class over time are available to help explain and predict both the claims number and claims amount processes. For the marginal claims distributions, this paper uses generalized linear models, an extension of linear regression, to describe cross-sectional characteristics. Elliptical copulas are used to model the dependencies over time, extending prior work that used multivariate t-copulas. The claims number process is represented using a Poisson regression model that is conditioned on a sequence of latent variables. These latent variables drive the serial dependencies among claims numbers; their joint distribution is represented using an elliptical copula. In this way, the paper provides a unified treatment of both the continuous claims amount and discrete claims number processes. The paper presents an illustrative example of Massachusetts automobile claims. Estimates of the latent claims process parameters are derived and simulated predictions are provided.
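A minimal sketch of the copula construction for serially dependent claim counts, using a Gaussian copula (the simplest elliptical member) and assumed yearly Poisson means:

```python
# Dependent Poisson counts via a Gaussian copula: correlated normals are
# pushed through the Poisson quantile function, keeping Poisson marginals.
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(6)
T, n_risk = 5, 1000                          # years of history, risk classes
rho = 0.6
R = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))  # AR(1)-type correlation
L = np.linalg.cholesky(R)
lam = np.array([2.0, 2.5, 3.0, 3.5, 4.0])    # assumed Poisson regression means per year

z = rng.standard_normal((n_risk, T)) @ L.T   # Gaussian copula draws
u = norm.cdf(z)
counts = poisson.ppf(u, lam).astype(int)     # marginally Poisson(lam[t]), dependent in t

print("empirical means:", counts.mean(axis=0))   # approx lam
print("lag-1 count correlation:", np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])
```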

16.
In multivariate categorical data, models based on conditional independence assumptions, such as latent class models, offer efficient estimation of complex dependencies. However, Bayesian versions of latent structure models for categorical data typically do not appropriately handle impossible combinations of variables, also known as structural zeros. Allowing nonzero probability for impossible combinations results in inaccurate estimates of joint and conditional probabilities, even for feasible combinations. We present an approach for estimating posterior distributions in Bayesian latent structure models with potentially many structural zeros. The basic idea is to treat the observed data as a truncated sample from an augmented dataset, thereby allowing us to exploit the conditional independence assumptions for computational expediency. As part of the approach, we develop an algorithm for collapsing a large set of structural zero combinations into a much smaller set of disjoint marginal conditions, which speeds up computation. We apply the approach to sample from a semiparametric version of the latent class model with structural zeros in the context of a key issue faced by national statistical agencies seeking to disseminate confidential data to the public: estimating the number of records in a sample that are unique in the population on a set of publicly available categorical variables. The latent class model offers remarkably accurate estimates of population uniqueness, even in the presence of a large number of structural zeros.

17.
Contour maps are widely used to display estimates of spatial fields. Instead of showing the estimated field, a contour map shows only a fixed number of contour lines for different levels. However, despite the ubiquitous use of these maps, the uncertainty associated with them has been given surprisingly little attention. We derive measures of the statistical uncertainty, or quality, of contour maps, and use these to decide an appropriate number of contour lines that reflects the uncertainty in the estimated spatial field. For practical use in geostatistics and medical imaging, computational methods are constructed that can be applied to Gaussian Markov random fields and, in particular, used in combination with integrated nested Laplace approximations for latent Gaussian models. The methods are demonstrated on simulated data, and an application to temperature estimation is presented.
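A minimal pointwise sketch of contour uncertainty for a Gaussian field, given posterior means and standard deviations; the paper's joint measures are more refined, and the surface and level below are assumptions:

```python
# Pointwise probability of lying above a contour level for a Gaussian field.
import numpy as np
from scipy.stats import norm

x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
mean = np.sin(3 * x) + np.cos(3 * y)       # posterior mean surface (illustrative)
sd = 0.3 * np.ones_like(mean)              # posterior standard deviation

level = 0.5
p_above = 1.0 - norm.cdf((level - mean) / sd)   # P(field > level), pointwise

# Where this probability is near 0.5, the side of the contour is ambiguous;
# a large ambiguous share suggests drawing fewer contour lines.
uncertain = (p_above > 0.25) & (p_above < 0.75)
print("share of the map with an ambiguous side of the contour:",
      float(uncertain.mean()))
```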

18.
We present an approach for penalized tensor decomposition (PTD) that estimates smoothly varying latent factors in multiway data. This generalizes existing work on sparse tensor decomposition and penalized matrix decompositions, in a manner parallel to the generalized lasso for regression and smoothing problems. Our approach presents many nontrivial challenges at the intersection of modeling and computation, which are studied in detail. An efficient coordinate-wise optimization algorithm for PTD is presented, and its convergence properties are characterized. The method is applied to simulated data as well as real data on flu hospitalizations in Texas and motion-capture data from video cameras. These results show that our penalized tensor decomposition can offer major improvements on existing methods for analyzing multiway data that exhibit smooth spatial or temporal features.
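A minimal sketch of penalized rank-1 decomposition of a 3-way array via alternating least squares with an l1 soft-threshold on one factor, i.e., the sparse special case that PTD generalizes; the data and penalty weight are assumptions:

```python
# Rank-1 tensor decomposition with an l1-penalized (soft-thresholded) factor.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(7)
a = soft(rng.standard_normal(10), 1.0)             # sparse ground-truth factor
b, c = rng.standard_normal(12), rng.standard_normal(8)
ten = np.einsum("i,j,k->ijk", a, b, c) + 0.1 * rng.standard_normal((10, 12, 8))

# Initialise from leading singular vectors of the unfoldings (HOSVD-style).
u = np.linalg.svd(ten.reshape(10, -1), full_matrices=False)[0][:, 0]
v = np.linalg.svd(ten.transpose(1, 0, 2).reshape(12, -1), full_matrices=False)[0][:, 0]
w = np.linalg.svd(ten.transpose(2, 0, 1).reshape(8, -1), full_matrices=False)[0][:, 0]

lam = 0.5
for _ in range(50):                                # block coordinate updates
    u = soft(np.einsum("ijk,j,k->i", ten, v, w), lam) / ((v @ v) * (w @ w))
    v = np.einsum("ijk,i,k->j", ten, u, w) / ((u @ u) * (w @ w))
    w = np.einsum("ijk,i,j->k", ten, u, v) / ((u @ u) * (v @ v))

print("exact zeros in the penalized factor:", int(np.sum(u == 0.0)))
```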

19.
By applying ideas from option pricing theory, this paper models the estimation of a firm-value distribution function as an entropy optimization problem subject to correlation constraints. It is shown that the problem can be converted to the dual of a computationally attractive primal geometric programming (GP) problem and easily solved using publicly available software. A numerical example involving stock price data from a Japanese company demonstrates the practical value of the GP approach. Given the difficulties that Monte Carlo simulation, widely used in option pricing and risk analysis, faces in handling distribution functions subject to correlations, the GP-based method discussed here may have computational advantages in wider areas of computational finance beyond the application discussed here.
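A minimal sketch of the underlying entropy optimization: recovering a firm-value distribution on a grid from moment constraints by solving the convex dual directly, a small-scale stand-in for the geometric-programming route (the grid and target moments are assumptions):

```python
# Maximum-entropy distribution under moment constraints, via the convex dual.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

v = np.linspace(50.0, 150.0, 201)          # firm-value grid
s = (v - 100.0) / 15.0                     # standardised values, for conditioning
F = np.vstack([s, s ** 2])                 # moment features
m = np.array([0.0, 1.0])                   # i.e. mean 100 and sd 15 on the original scale

def dual(lmbda):                           # log-partition minus lambda'm
    return logsumexp(F.T @ lmbda) - lmbda @ m

lmbda = minimize(dual, np.zeros(2), method="BFGS").x
logq = F.T @ lmbda
q = np.exp(logq - logsumexp(logq))         # maximum-entropy distribution on the grid
print("fitted mean:", (q * v).sum())
```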

20.
We propose a demand estimation method to discover heterogeneous consumer groups. The estimation requires only historical sales data and product availability. Consumers belonging to different segments possess heterogeneous preferences and, in turn, heterogeneous substitution behaviors. For such consumers, the latent class consumer choice model can better represent heterogeneous purchasing behavior. In the latent class choice model, there are multiple consumer segments, and the segment types are not observable to the retailer. An expectation-maximization (EM) method is developed to jointly estimate the arrival rate and the parameters of the choice model. The method enables a simple estimation procedure by treating the observed data as incomplete observations of the consumer type along with the consumer's first choice, i.e., the choice before substitution effects occur. We test the procedure on simulated data sets. The results show that the procedure effectively detects heterogeneous consumer segments that have significant market presence.
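A minimal EM sketch for a two-segment latent class model with categorical choices, keeping only the latent-segment mechanics (arrival rates and availability-driven substitution are omitted, and the data are simulated):

```python
# EM for a two-segment mixture of multinomial choice probabilities.
import numpy as np

rng = np.random.default_rng(8)
J = 4                                       # products (alternatives 0..3)
P_true = np.array([[0.6, 0.2, 0.1, 0.1],    # segment 0 preferences (assumed)
                   [0.1, 0.1, 0.2, 0.6]])   # segment 1 preferences (assumed)
w_true = np.array([0.7, 0.3])
seg = rng.choice(2, size=5000, p=w_true)
choice = np.array([rng.choice(J, p=P_true[s]) for s in seg])

w = np.array([0.5, 0.5])
P = rng.dirichlet(np.ones(J), size=2)
for _ in range(200):
    # E-step: posterior segment membership per observation.
    lik = w[None, :] * P[:, choice].T       # shape (n, 2)
    post = lik / lik.sum(axis=1, keepdims=True)
    # M-step: reweighted segment shares and choice probabilities.
    w = post.mean(axis=0)
    for k in range(2):
        P[k] = np.bincount(choice, weights=post[:, k], minlength=J)
        P[k] /= P[k].sum()

# Recovers w_true and P_true up to label switching of the two segments.
print("segment weights:", np.round(w, 3))
print("choice probabilities:\n", np.round(P, 3))
```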
