Similar Documents
20 similar documents found (search time: 31 ms)
1.
王佳  丁洁丽 《数学杂志》2015,35(6):1521-1532
This paper studies the problem of finding more stable numerical methods for cases where algorithms such as Newton-Raphson fail. Using the quadratic lower-bound algorithm proposed by Böhning and Lindsay (1988), we construct a surrogate for the maximum likelihood objective under the logistic regression model and carry out numerical simulations. The results show that the quadratic lower-bound algorithm is a reasonable substitute for the Newton-Raphson algorithm, extending the application of numerical methods to logistic regression models.
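Entry 1's quadratic lower-bound idea can be sketched directly: Böhning and Lindsay's bound says the negative Hessian X'WX of the logistic log-likelihood is dominated by B = X'X/4, so replacing Newton's iteration-dependent Hessian with the fixed matrix B gives a monotone update that factors one matrix once. A minimal sketch (toy data, seed, iteration count, and tolerances are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_qlb(X, y, n_iter=1000):
    """Quadratic lower-bound (MM) iteration for logistic regression.

    Bohning-Lindsay bound: the negative Hessian X'WX, W = diag(p(1-p)),
    is dominated by B = X'X/4, so the update
        beta <- beta + B^{-1} X'(y - p)
    increases the log-likelihood at every step; B is inverted once."""
    B_inv = np.linalg.inv(X.T @ X / 4.0)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = beta + B_inv @ (X.T @ (y - sigmoid(X @ beta)))
    return beta

# toy data with known coefficients (illustrative only)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=(2000, 2))])
true_beta = np.array([0.5, -1.0, 2.0])
y = (rng.random(2000) < sigmoid(X @ true_beta)).astype(float)
beta_hat = logistic_qlb(X, y)
```

Unlike Newton-Raphson, the step never inverts an iteration-dependent Hessian, which is what makes it stable when Newton's updates overshoot.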

2.
Abstract

We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow “moments” that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples—specification of a multivariate prior distribution in a constrained-parameter family and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation, combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen, uses computational ideas and importance sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood, and has some similarities to the maximum likelihood methods of Wei and Tanner.
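Entry 2's core loop — matching a simulated moment to the observed one with Newton-Raphson, using the derivative identity d/dθ E_θ[T] = E_θ[T · ∂log f/∂θ] — can be sketched in a toy exponential model where the rate is the only parameter; common random numbers keep the simulated moment smooth in θ. Everything concrete here (model, seed, sample sizes) is my own illustration, not the article's code:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(scale=0.5, size=2000)   # Exponential with rate 2
m_obs = data.mean()                            # observed first moment

u = rng.random(100_000)  # common random numbers, reused every iteration

theta = 1.0
for _ in range(25):
    x = -np.log(u) / theta               # Exponential(rate=theta) draws
    m_hat = x.mean()                     # simulated moment E_theta[X]
    score = 1.0 / theta - x              # d log f(x; theta) / d theta
    d_hat = np.mean(x * score)           # d/dtheta E_theta[X] via the identity
    theta = theta - (m_hat - m_obs) / d_hat   # Newton-Raphson step
```

Because the same uniforms are reused, the fixed point solves the smooth equation m_hat(θ) = m_obs exactly, and the iteration behaves like deterministic Newton.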

3.
Linear transformation models, which have been extensively studied in survival analysis, include the two special cases: the proportional hazards model and the proportional odds model. Nonparametric maximum likelihood estimation is usually used to derive the efficient estimators. However, due to the large number of nuisance parameters, calculation of the nonparametric maximum likelihood estimator is difficult in practice, except for the proportional hazards model. We propose an efficient algorithm for computing the maximum likelihood estimates, where the dimensionality of the parameter space is dramatically reduced so that only a finite number of equations need to be solved. Moreover, the asymptotic variance is automatically estimated in the computing procedure. Extensive simulation studies indicate that the proposed algorithm works very well for linear transformation models. A real example is presented for an illustration of the new methodology.

4.
The skew-t normal distribution is an important statistical tool for analyzing data with sharp peaks and heavy tails. This study proposes a mixture of linear joint location and scale models for skew-t normal data and derives maximum likelihood estimates of the model parameters via the EM algorithm and the Newton-Raphson method. Simulation experiments verify the effectiveness of the proposed method, and an analysis of real data confirms that the model and method are practical and feasible.

5.
This paper studies maximum likelihood estimation of the parameters of log-linear models under missing data. The proposed model is fitted by the Monte Carlo EM algorithm: in the expectation step, the Metropolis-Hastings algorithm generates a sample of the missing data, and in the maximization step, Newton-Raphson iterations maximize the likelihood function. Finally, the Fisher information of the observed data yields the asymptotic variances and standard errors of the maximum likelihood estimates.

6.
This paper discusses parameter estimation for binomial linear random-effects models using quasi-Monte Carlo methods. We first write down the marginal log-likelihood of the observed data, then use a quasi-Monte Carlo rule to replace the integral in that function with a finite sum, and finally compute the maximum likelihood estimates with the Newton-Raphson algorithm. A seed-germination dataset illustrates that the method is simple and feasible.
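The step in entry 6 — rewriting the integral in the marginal log-likelihood as a quasi-Monte Carlo sum — amounts to averaging the integrand over a low-discrepancy point set instead of random draws. A self-contained sketch with a base-2 van der Corput sequence on a toy one-dimensional integral (the binomial random-effects likelihood would supply its own integrand; this is an illustration, not the article's code):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-`base` van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

# quasi-Monte Carlo approximation of the integral of exp(u) over [0, 1],
# whose exact value is e - 1
u = van_der_corput(4096)
qmc_est = np.exp(u).mean()
```

For smooth integrands the quasi-Monte Carlo error decays roughly like (log n)/n, versus n^{-1/2} for plain Monte Carlo, which is why the sum can stand in for the integral with modest n.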

7.
In this paper an implementation is discussed of a modified CANDECOMP algorithm for fitting Lazarsfeld's latent class model. The CANDECOMP algorithm is modified such that the resulting parameter estimates are non-negative and ‘best asymptotically normal’. In order to achieve this, the modified CANDECOMP algorithm minimizes a weighted least squares function instead of an unweighted least squares function as the traditional CANDECOMP algorithm does. To evaluate the new procedure, the modified CANDECOMP procedure with different weighting schemes is compared on five published data sets with the widely-used iterative proportional fitting procedure for obtaining maximum likelihood estimates of the parameters in the latent class model. It is found that, with appropriate weights, the modified CANDECOMP algorithm yields solutions that are nearly identical with those obtained by means of the maximum likelihood procedure. While the modified CANDECOMP algorithm tends to be computationally more intensive than the maximum likelihood method, it is very flexible in that it easily allows one to try out different weighting schemes.

8.
There exists an overall negative assessment of the performance of the simulated maximum likelihood algorithm in the statistics literature, founded on both theoretical and empirical results. At the same time, there also exist a number of highly successful applications. This paper explains the negative assessment by the coupling of the algorithm with “simple importance samplers”, samplers that are not explicitly parameter dependent. The successful applications in the literature are based on explicitly parameter dependent importance samplers. Simple importance samplers may efficiently simulate the likelihood function value, but fail to efficiently simulate the score function, which is the key to efficient simulated maximum likelihood. The theoretical points are illustrated by applying Laplace importance sampling in both variants to the classic salamander mating model.

9.
Abstract

Maximum pseudo-likelihood estimation has hitherto been viewed as a practical but flawed alternative to maximum likelihood estimation, necessary because the maximum likelihood estimator is too hard to compute, but flawed because of its inefficiency when the spatial interactions are strong. We demonstrate that a single Newton-Raphson step starting from the maximum pseudo-likelihood estimator produces an estimator which is close to the maximum likelihood estimator in terms of its actual value, attained likelihood, and efficiency, even in the presence of strong interactions. This hybrid technique greatly increases the practical applicability of pseudo-likelihood-based estimation. Additionally, in the case of the spatial point processes, we propose a proper maximum pseudo-likelihood estimator which is different from the conventional one. The proper maximum pseudo-likelihood estimator clearly shows better performance than the conventional one does when the spatial interactions are strong.
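The one-step device in entry 9 is generic: start from any consistent but inefficient estimator and take a single Newton-Raphson step on the log-likelihood. Entry 9 applies it to spatial point processes; as a minimal stand-in, here is the classic one-dimensional version — one step from the sample median toward the Cauchy location MLE (model, seed, and tolerances are my own illustration):

```python
import numpy as np

def cauchy_score_hessian(theta, x):
    """Score and observed Hessian of the Cauchy location log-likelihood
    l(theta) = -sum(log(1 + (x - theta)^2))."""
    u = x - theta
    g = np.sum(2.0 * u / (1.0 + u**2))
    h = np.sum(2.0 * (u**2 - 1.0) / (1.0 + u**2) ** 2)
    return g, h

def one_step_estimator(x):
    """Single Newton-Raphson step from a consistent pilot estimate (the median)."""
    theta0 = np.median(x)
    g, h = cauchy_score_hessian(theta0, x)
    return theta0 - g / h

rng = np.random.default_rng(1)
x = rng.standard_cauchy(5000) + 3.0   # true location 3
theta1 = one_step_estimator(x)
```

One step suffices because the pilot estimator is already root-n consistent; the Newton correction removes its first-order inefficiency without iterating to convergence.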

10.
This paper develops an analytical model of the generalized maintenance process for repairable equipment at air force stations under multi-echelon spares and an (s-1, s) inventory policy. Based on a Monte Carlo algorithm, a large number of sample paths are generated iteratively; fitting shows that the maintenance process converges in distribution to a lognormal distribution. Parameters are then inferred from the samples by both OLS (least squares) and ML (maximum likelihood) estimation, yielding a steady-state distribution function that passes a goodness-of-fit test. Finally, the steady-state maintainability, steady-state availability, and related measures of the equipment are computed. Compared with the assessment of the same equipment produced by the simlox model, the results agree well.

11.
To keep the power grid running safely and stably, an accurate probability distribution model of wind power output fluctuations is of great importance when controlling large-scale grid-connected wind power. Using a data-driven approach, a weighted Gaussian mixture distribution is adopted to fit the fluctuation characteristics of a large-scale wind power base; the model parameters are obtained by maximum likelihood estimation via the Expectation Maximization (EM) algorithm. Fit-evaluation indices are proposed for comparison with several other probability distribution models, and the results verify the effectiveness and applicability of the weighted Gaussian mixture model.

12.
We propose a model selection algorithm for high-dimensional clustered data. Our algorithm combines a classical penalized likelihood method with a composite likelihood approach in the framework of colored graphical Gaussian models. Our method is designed to identify high-dimensional dense networks with a large number of edges but sparse edge classes. Its empirical performance is demonstrated through simulation studies and a network analysis of a gene expression dataset.

13.
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.

14.
In this article, I propose an efficient algorithm to compute ℓ1 regularized maximum likelihood estimates in the Gaussian graphical model. These estimators, recently proposed in an earlier article by Yuan and Lin, conduct parameter estimation and model selection simultaneously and have been shown to enjoy nice properties in both large and finite samples. To compute the estimates, however, can be very challenging in practice because of the high dimensionality and positive definiteness constraint on the covariance matrix. Taking advantage of the recent advance in semidefinite programming, Yuan and Lin suggested a sophisticated interior-point algorithm to solve the optimization problem. Although a polynomial time algorithm, the optimization technique is known not to be scalable for high-dimensional problems. Alternatively, this article shows that the estimates can be computed by iteratively solving a sequence of ℓ1 regularized quadratic programs. By effectively exploiting the sparsity of the graphical structure, I propose a new algorithm that can be applied to problems of larger scale. When combined with a path-following strategy, the new algorithm can be used to efficiently approximate the entire solution path of the ℓ1 regularized maximum likelihood estimates, which also facilitates the choice of tuning parameter. I demonstrate the efficacy and usefulness of the proposed algorithm on a few simulations and real datasets.
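The inner problem in entry 14 — one ℓ1-regularized quadratic program — can be solved by coordinate descent with soft-thresholding; the article's algorithm iterates over a sequence of such programs. A sketch of the inner solver only (problem data and the penalty level are illustrative, not taken from the article):

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def l1_quadratic_program(Q, b, lam, n_sweeps=500):
    """Coordinate descent for
        min_beta  0.5 * beta' Q beta - b' beta + lam * ||beta||_1
    with Q symmetric positive definite."""
    beta = np.zeros(len(b))
    for _ in range(n_sweeps):
        for j in range(len(b)):
            # partial residual: b_j minus the off-diagonal contribution
            r = b[j] - Q[j] @ beta + Q[j, j] * beta[j]
            beta[j] = soft_threshold(r, lam) / Q[j, j]
    return beta

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 5))
Q = A.T @ A / 50 + 0.1 * np.eye(5)   # well-conditioned toy quadratic
b = rng.normal(size=5)
beta = l1_quadratic_program(Q, b, lam=0.2)
```

At the solution the KKT conditions hold: (Qβ − b)_j = −λ·sign(β_j) on the active coordinates and |(Qβ − b)_j| ≤ λ on the zero ones.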

15.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented, first a nonlinear Gaussian state-space model then a state-space model having dynamics expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and Gibbs sampling and particle marginal methods for Bayesian inference are presented.

16.
Data associated with the linear state-space model can be assembled as a matrix whose Cholesky decomposition leads directly to a likelihood evaluation. It is possible to build several matrices for which this is true. Although the chosen matrix or assemblage can be very large, rows and columns can usually be rearranged so that sparse matrix factorization is feasible and provides an alternative to the Kalman filter. Moreover, technology for calculating derivatives of the log-likelihood using backward differentiation is available, and hence it is possible to maximize the likelihood using the Newton–Raphson approach. Emphasis is given to the estimation of dispersion parameters by both maximum likelihood and restricted maximum likelihood, and an illustration is provided for an ARMA(1,1) model.
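The Cholesky-to-likelihood link in entry 16 is: if y ~ N(0, Σ) and Σ = LL', then log L(y) = −½(n·log 2π + 2·Σᵢ log Lᵢᵢ + ‖L⁻¹y‖²), so one factorization yields both the determinant and the quadratic form. A dense sketch (at scale a sparse factorization would replace `cholesky`; the AR(1) covariance below is my stand-in for a state-space model's marginal covariance, not the article's ARMA(1,1) example):

```python
import numpy as np

def gaussian_loglik_chol(y, Sigma):
    """Gaussian log-likelihood of y ~ N(0, Sigma) from one Cholesky factorization."""
    n = len(y)
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, y)                    # forward solve L z = y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))    # log|Sigma| from the factor
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)

# AR(1) marginal covariance as a stand-in for a state-space model
n, phi = 200, 0.8
idx = np.arange(n)
Sigma = phi ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - phi**2)
rng = np.random.default_rng(3)
y = np.linalg.cholesky(Sigma) @ rng.normal(size=n)
ll = gaussian_loglik_chol(y, Sigma)
```

The same factor serves every likelihood evaluation at fixed parameters, which is what makes the factorization route competitive with running the Kalman filter.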

17.
This article considers the estimation of parameters of Weibull distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood method under step-stress partially accelerated test model. The maximum likelihood estimates (MLEs) of the unknown parameters are obtained by Newton–Raphson algorithm. Also, the approximate Fisher information matrix is obtained for constructing asymptotic confidence bounds for the model parameters. The biases and mean square errors of the maximum likelihood estimators are computed to assess their performances through a Monte Carlo simulation study.
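For entry 17, the Newton-Raphson step for the Weibull reduces to one equation once the scale is profiled out: solve g(k) = 1/k + mean(log x) − Σxᵏ log x / Σxᵏ = 0 for the shape k, then set λ̂ = (mean xᵏ)^{1/k}. The sketch below handles complete samples only — hybrid censoring and the step-stress model in the article change the likelihood — and its data and tolerances are illustrative:

```python
import numpy as np

def weibull_mle(x, k=1.0, n_iter=50):
    """Newton-Raphson for the Weibull shape k on a complete (uncensored)
    sample, with the scale lambda profiled out of the likelihood."""
    logx = np.log(x)
    for _ in range(n_iter):
        xk = x ** k
        s0, s1, s2 = xk.sum(), (xk * logx).sum(), (xk * logx**2).sum()
        g = 1.0 / k + logx.mean() - s1 / s0            # profile score in k
        gp = -1.0 / k**2 - (s2 * s0 - s1**2) / s0**2   # its derivative (< 0)
        k_new = k - g / gp
        k = k_new if k_new > 0 else k / 2.0            # keep the shape positive
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(4)
x = rng.weibull(2.5, size=5000) * 1.5   # shape 2.5, scale 1.5
k_hat, lam_hat = weibull_mle(x)
```

Because g(k) is strictly decreasing, the Newton iteration with the positivity safeguard converges from the conventional start k = 1.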

18.
We will propose a new and practical method for estimating the failure probability of a large number of small to medium scale companies using their balance sheet data. We will use the maximum likelihood method to estimate the best parameters of the logit function, where the failure intensity function in its exponent is represented as a convex quadratic function instead of a commonly used linear function. The reasons for using this type of function are: (i) it can better represent the observed nonlinear dependence of failure probability on financial attributes, (ii) the resulting likelihood function can be maximized using a cutting plane algorithm developed for nonlinear semi-definite programming problems. We will show that we can achieve better prediction performance than the standard logit model, using thousands of sample companies. Revised: December 2002.

19.
A mixture approach to clustering is an important technique in cluster analysis. A mixture of multivariate multinomial distributions is usually used to analyze categorical data with latent class model. The parameter estimation is an important step for a mixture distribution. Described here are four approaches to estimating the parameters of a mixture of multivariate multinomial distributions. The first approach is an extended maximum likelihood (ML) method. The second approach is based on the well-known expectation maximization (EM) algorithm. The third approach is the classification maximum likelihood (CML) algorithm. In this paper, we propose a new approach using the so-called fuzzy class model and then create the fuzzy classification maximum likelihood (FCML) approach for categorical data. The accuracy, robustness and effectiveness of these four types of algorithms for estimating the parameters of multivariate binomial mixtures are compared using real empirical data and samples drawn from the multivariate binomial mixtures of two classes. The results show that the proposed FCML algorithm presents better accuracy, robustness and effectiveness. Overall, the FCML algorithm has the superiority over the ML, EM and CML algorithms. Thus, we recommend FCML as another good tool for estimating the parameters of mixture multivariate multinomial models.

20.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.
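The integration problem entry 20 describes can be made concrete with a one-dimensional random-intercept logistic model, where a cluster's marginal likelihood is ∫ Π p(yⱼ|b) φ(b; 0, σ²) db. In this sketch a Gauss-Hermite rule stands in for the article's spherical–radial rule — a deliberate substitution to keep the example short — and the model and numbers are illustrative:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(beta0, sigma, y, n_nodes=30):
    """Marginal log-likelihood of one cluster in a random-intercept logistic
    model, integrating out b ~ N(0, sigma^2) by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes                  # change of variables
    p = 1.0 / (1.0 + np.exp(-(beta0 + b[:, None])))   # nodes x observations
    lik_given_b = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    return np.log(weights @ lik_given_b / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])   # one cluster's binary responses
ll = marginal_loglik(0.5, 1.0, y)
```

A handful of quadrature nodes replaces thousands of Monte Carlo draws of b, which is the evaluation-count saving the abstract points to; the article's spherical–radial rule plays the same role in higher dimensions.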


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号