Similar Articles
1.
Maximum likelihood methods are important for system modeling and parameter estimation. This paper derives a recursive maximum likelihood least squares identification algorithm for systems with autoregressive moving average noise, based on the maximum likelihood principle. In this derivation, we prove that maximizing the likelihood function is equivalent to minimizing the least squares cost function. The proposed algorithm is different from the corresponding generalized extended least squares algorithm. Simulation results show that the proposed algorithm has a higher estimation accuracy than the recursive generalized extended least squares algorithm.
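The recursive core shared by identification algorithms of this kind is the standard recursive least-squares update. The sketch below is generic, not the paper's specific recursive maximum likelihood algorithm; for ARMA-noise systems the regressor would additionally contain estimated noise terms, which is the part the paper elaborates.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update (generic sketch).
    theta: current parameter estimate (d,); P: covariance-like matrix (d, d);
    phi: regressor for the new sample (d,); y: new scalar output;
    lam: forgetting factor (1.0 gives ordinary recursive least squares)."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)          # gain vector
    err = y - phi @ theta                     # one-step-ahead prediction error
    theta_new = theta + gain * err            # parameter correction
    P_new = (P - np.outer(gain, Pphi)) / lam  # covariance update
    return theta_new, P_new
```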

2.
In this paper, a semiparametric two-sample density ratio model is considered and the empirical likelihood method is applied to obtain the parameter estimates. A commonly occurring problem in computation is that the empirical likelihood function may be a concave-convex function. Here a simple Lagrange saddle point algorithm is presented for computing the saddle point of the empirical likelihood function when the Lagrange multiplier has no explicit solution. In this way the maximum empirical likelihood estimation (MELE) of the parameters can be obtained. Monte Carlo simulations are presented to illustrate the Lagrange saddle point algorithm.
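As a hedged illustration of the inner root-finding that arises when the Lagrange multiplier has no explicit solution, the sketch below solves the analogous one-sample empirical likelihood problem with a scalar mean constraint by Newton's method; it is not the paper's two-sample density ratio algorithm, and the function name is a placeholder.

```python
import numpy as np

def el_multiplier(x, mu, iters=50, tol=1e-10):
    """Newton iterations for the Lagrange multiplier in one-sample empirical
    likelihood with the mean constraint E[X] = mu (illustrative analogue only).
    Requires 1 + lam * (x_i - mu) > 0 for all i along the iteration path."""
    z = x - mu
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        score = np.sum(z / denom)            # derivative of the dual objective in lam
        hess = -np.sum((z / denom) ** 2)     # always negative, so the Newton step is defined
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam  # EL weights are then p_i = 1 / (n * (1 + lam * (x_i - mu)))
```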

3.
The semiparametric proportional odds model for survival data is useful when mortality rates of different groups converge over time. However, fitting the model by maximum likelihood proves computationally cumbersome for large datasets because the number of parameters exceeds the number of uncensored observations. We present here an alternative to the standard Newton-Raphson method of maximum likelihood estimation. Our algorithm, an example of a minorization-maximization (MM) algorithm, is guaranteed to converge to the maximum likelihood estimate whenever it exists. For large problems, both the algorithm and its quasi-Newton accelerated counterpart outperform Newton-Raphson by more than two orders of magnitude.
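For readers unfamiliar with minorization-maximization, the ascent property behind the convergence guarantee follows in one line from the definition of a minorizing surrogate (generic notation, not the paper's):

```latex
% g minorizes \ell: g(\theta \mid \theta^{(k)}) \le \ell(\theta) for all \theta,
% with equality at \theta = \theta^{(k)}.
\theta^{(k+1)} = \arg\max_{\theta} g(\theta \mid \theta^{(k)})
\quad\Longrightarrow\quad
\ell(\theta^{(k+1)}) \;\ge\; g(\theta^{(k+1)} \mid \theta^{(k)}) \;\ge\; g(\theta^{(k)} \mid \theta^{(k)}) \;=\; \ell(\theta^{(k)}).
```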

4.
Maximum likelihood estimation of the multivariate t distribution, especially with unknown degrees of freedom, has been an interesting topic in the development of the EM algorithm. After a brief review of the EM algorithm and its application to finding the maximum likelihood estimates of the parameters of the t distribution, this paper provides new versions of the ECME algorithm for maximum likelihood estimation of the multivariate t distribution from data with possibly missing values. The results show that the new versions of the ECME algorithm converge faster than the previous procedures. Most important, the idea of this new implementation is quite general and useful for the development of the EM algorithm. Comparisons of different methods based on two datasets are presented.
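As background, the following is a minimal sketch of the standard EM iteration for a multivariate t distribution with known degrees of freedom and complete data; the ECME versions in the paper additionally update the degrees of freedom and accommodate missing values, so this shows only the shared building block (function name and defaults are illustrative).

```python
import numpy as np

def em_mvt(y, nu, iters=100):
    """Standard EM for a multivariate t distribution with known degrees of
    freedom nu and complete data. y: (n, p) data matrix.
    Returns the location mu and scale matrix Sigma."""
    n, p = y.shape
    mu, Sigma = y.mean(axis=0), np.cov(y, rowvar=False)
    for _ in range(iters):
        diff = y - mu
        delta = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)  # squared Mahalanobis distances
        w = (nu + p) / (nu + delta)                  # E-step: expected precision weights
        mu = (w[:, None] * y).sum(axis=0) / w.sum()  # M-step: weighted mean
        diff = y - mu
        Sigma = (w[:, None] * diff).T @ diff / n     # M-step: weighted scatter matrix
    return mu, Sigma
```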

5.
When the true mixing density is known to be continuous, the maximum likelihood estimate of the mixing density does not provide a satisfying answer due to its degeneracy. Estimation of mixing densities is a well-known ill-posed indirect problem. In this article, we propose to estimate the mixing density by maximizing a penalized likelihood and call the resulting estimate the nonparametric maximum penalized likelihood estimate (NPMPLE). Using theory and methods from the calculus of variations and differential equations, a new functional EM algorithm is derived for computing the NPMPLE of the mixing density. In the algorithm, maximizers in M-steps are found by solving an ordinary differential equation with boundary conditions numerically. Simulation studies show the algorithm outperforms other existing methods such as the popular EMS algorithm. Some theoretical properties of the NPMPLE and the algorithm are also discussed. Computer code used in this article is available online.

6.
This paper discusses an implementation of a modified CANDECOMP algorithm for fitting Lazarsfeld's latent class model. The CANDECOMP algorithm is modified such that the resulting parameter estimates are non-negative and ‘best asymptotically normal’. In order to achieve this, the modified CANDECOMP algorithm minimizes a weighted least squares function instead of an unweighted least squares function as the traditional CANDECOMP algorithm does. To evaluate the new procedure, the modified CANDECOMP procedure with different weighting schemes is compared on five published data sets with the widely-used iterative proportional fitting procedure for obtaining maximum likelihood estimates of the parameters in the latent class model. It is found that, with appropriate weights, the modified CANDECOMP algorithm yields solutions that are nearly identical to those obtained by means of the maximum likelihood procedure. While the modified CANDECOMP algorithm tends to be computationally more intensive than the maximum likelihood method, it is very flexible in that it easily allows one to try out different weighting schemes.

7.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging due to its computational complexity in seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation–maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared to the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with the quasi-Newton method not only substantially reduces the computational complexity and duration, but also retains comparable estimation efficiency.
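For context, one common composite likelihood construction is the pairwise composite log likelihood, which replaces the full likelihood with a weighted sum of bivariate margins (generic notation; the paper's series of composite likelihoods is tailored to joint latent class models):

```latex
\ell_c(\theta) \;=\; \sum_{i < j} w_{ij} \, \log f(y_i, y_j; \theta).
```

Under standard regularity conditions its maximizer is consistent and asymptotically normal with a sandwich (Godambe) covariance, in line with the asymptotics stated above.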

8.
The EM algorithm is a widely used methodology for penalized likelihood estimation. Provable monotonicity and convergence are the hallmarks of the EM algorithm and these properties are well established for smooth likelihood and smooth penalty functions. However, many relaxed versions of variable selection penalties are not smooth. In this paper, we introduce a new class of space alternating penalized Kullback proximal extensions of the EM algorithm for nonsmooth likelihood inference. We show that the cluster points of the new method are stationary points even when they lie on the boundary of the parameter set. We illustrate the new class of algorithms for the problems of model selection for finite mixtures of regression and of sparse image reconstruction.

9.
This paper discusses how to apply quasi-Monte Carlo methods to parameter estimation in binomial linear random effects models. The marginal log-likelihood function of the observed data is written down first; a quasi-Monte Carlo rule is then used to express the integral in this function as a sum, and the Newton-Raphson algorithm is applied to compute the maximum likelihood estimates of the parameters. A seed dataset is used as an example to show that the method is simple and feasible.
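Because the abstract only outlines the computation, here is a minimal sketch of a quasi-Monte Carlo approximation to the marginal log likelihood for a binomial model with a single normal random intercept; the logit link, the scrambled Halton sequence, and all names are assumptions for illustration, and the returned value would then be maximized, e.g. by Newton-Raphson or a quasi-Newton routine.

```python
import numpy as np
from scipy.stats import qmc, norm, binom

def qmc_marginal_loglik(beta, sigma, y, n, X, K=1024):
    """Quasi-Monte Carlo approximation of the marginal log likelihood of a
    binomial model with one normal random intercept (illustrative sketch).
    y, n: (m,) successes and trials; X: (m, d) covariates; beta: (d,), sigma: float."""
    u = qmc.Halton(d=1, scramble=True).random(K).ravel()  # low-discrepancy points in (0, 1)
    z = norm.ppf(u)                                        # map to standard normal nodes
    eta = X @ beta                                         # fixed-effect linear predictor, (m,)
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + sigma * z[None, :])))  # success probabilities, (m, K)
    lik = binom.pmf(y[:, None], n[:, None], p).mean(axis=1)         # QMC average over nodes
    return np.log(lik).sum()
```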

10.
We propose a model selection algorithm for high-dimensional clustered data. Our algorithm combines a classical penalized likelihood method with a composite likelihood approach in the framework of colored graphical Gaussian models. Our method is designed to identify high-dimensional dense networks with a large number of edges but sparse edge classes. Its empirical performance is demonstrated through simulation studies and a network analysis of a gene expression dataset.

11.
The zeta distribution with regression parameters has rarely been used in statistics because of the difficulty of estimating the parameters by traditional maximum likelihood. We propose an alternative method for estimating the parameters based on an iteratively reweighted least-squares algorithm. The quadratic distance estimator (QDE) obtained is consistent, asymptotically unbiased and normally distributed; the estimate can also serve as the initial value required by an algorithm to maximize the likelihood function. We illustrate the method with a numerical example from the insurance literature; we compare the values of the estimates obtained by the quadratic distance and maximum likelihood methods and their approximate variance–covariance matrices. Finally, we calculate the bias, variance and the asymptotic efficiency of the QDE compared to the maximum likelihood estimator (MLE) for some values of the parameters.
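As a reminder, the generic iteratively reweighted least-squares template that such estimators instantiate is the following (the weight matrix W, mean mu, and working response z depend on the specific model and distance, which the paper spells out for the zeta regression case):

```latex
\beta^{(t+1)} \;=\; \bigl(X^{\top} W^{(t)} X\bigr)^{-1} X^{\top} W^{(t)} z^{(t)},
\qquad
z^{(t)} \;=\; X\beta^{(t)} + \bigl(W^{(t)}\bigr)^{-1}\bigl(y - \mu^{(t)}\bigr).
```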

12.
We study a modification of the EMS algorithm in which each step of the EMS algorithm is preceded by a nonlinear smoothing step constructed from S, the smoothing operator of the EMS algorithm. In the context of positive integral equations (à la positron emission tomography) the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original EM algorithm. This suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence of the new algorithm. We also present some simulation results for the integral equation of stereology, which suggest that the new algorithm behaves roughly like the EMS algorithm.

13.
One of the most powerful algorithms for obtaining maximum likelihood estimates for many incomplete-data problems is the EM algorithm. However, when the parameters satisfy a set of nonlinear restrictions, it is difficult to apply the EM algorithm directly. In this paper, we propose an asymptotic maximum likelihood estimation procedure under a set of nonlinear inequality restrictions on the parameters, in which the EM algorithm can be used. Essentially, this kind of estimation problem is a stochastic optimization problem in the M-step. We make use of methods in stochastic optimization to overcome the difficulty caused by the nonlinearity of the given constraints.

14.
For semiparametric survival models with interval-censored data and a cure fraction, it is often difficult to derive nonparametric maximum likelihood estimation due to the challenge in maximizing the complex likelihood function. In this article, we propose a computationally efficient EM algorithm, facilitated by a gamma-Poisson data augmentation, for maximum likelihood estimation in a class of generalized odds rate mixture cure (GORMC) models with interval-censored data. The gamma-Poisson data augmentation greatly simplifies the EM estimation and enhances the convergence speed of the EM algorithm. The empirical properties of the proposed method are examined through extensive simulation studies and compared with numerical maximum likelihood estimates. An R package “GORCure” is developed to implement the proposed method and its use is illustrated by an application to the Aerobic Center Longitudinal Study dataset. Supplementary material for this article is available online.

15.
This paper presents a maximum likelihood estimation method for imperfectly observed Gibbsian fields on a finite lattice. The method is an adaptation of the algorithm given in Younes [28]. Presentation of the new algorithm is followed by a theorem about the limit of the second derivative of the likelihood as the lattice increases, which is related to convergence of the method. Finally, some practical remarks about the implementation of the procedure are given.

16.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.

17.
The complexity of the Metropolis–Hastings (MH) algorithm arises from the requirement of a likelihood evaluation for the full dataset in each iteration. One proposed solution speeds up the algorithm by a delayed acceptance approach in which the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines whether it is likely that the draw will be accepted and, if so, the second stage uses the full-data likelihood to decide upon final acceptance. Evaluating the full-data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator that incorporates auxiliary information about the full-data likelihood while only operating on a sparse set of the data. We prove that the resulting delayed acceptance MH is more efficient. The caveat of this approach is that the full dataset needs to be evaluated in the second stage. We therefore propose to substitute this evaluation by an estimate and construct a state-dependent approximation thereof to use in the first stage. This results in an algorithm that (i) can use a smaller subsample m by leveraging recent advances in Pseudo-Marginal MH (PMMH) and (ii) is provably within O(m⁻²) of the true posterior.
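A minimal sketch of a generic two-stage delayed-acceptance MH step with a symmetric proposal is given below; the cheap surrogate stands in for the subsample-based estimator discussed above, and all function names are placeholders rather than the paper's construction.

```python
import numpy as np

def delayed_acceptance_step(theta, logpost_cheap, logpost_full, propose, rng):
    """One delayed-acceptance Metropolis-Hastings step with a symmetric proposal
    (generic sketch). Stage 1 screens with a cheap surrogate log posterior;
    stage 2 corrects the screening error with the expensive full log posterior."""
    prop = propose(theta)
    log_a1 = logpost_cheap(prop) - logpost_cheap(theta)           # stage-1 log acceptance ratio
    if np.log(rng.uniform()) >= min(0.0, log_a1):
        return theta                                              # early rejection: skip full evaluation
    log_a2 = logpost_full(prop) - logpost_full(theta) - log_a1    # stage-2 correction ratio
    return prop if np.log(rng.uniform()) < min(0.0, log_a2) else theta
```

This two-stage acceptance preserves the target distribution because the second stage exactly corrects the surrogate's error in the first-stage ratio.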

18.
This paper first derives the maximum likelihood estimate of the shape parameter of the generalized inverse exponential distribution under doubly Type-I (time) censored samples. Although no explicit expression for the estimate can be obtained, the maximum likelihood estimate is shown to exist and to be unique on (0, +∞). An EM algorithm is then proposed for computing the estimate of the shape parameter, and this estimate has good convergence properties; the asymptotic variance and an approximate confidence interval of the EM estimate of the shape parameter are also given. Finally, numerical simulations compare the performance of the maximum likelihood estimate and the EM estimate of the shape parameter, showing that estimating the shape parameter with the EM algorithm is feasible and performs relatively well in the simulations.

19.
Latent trait models such as item response theory (IRT) hypothesize a functional relationship between an unobservable, or latent, variable and an observable outcome variable. In educational measurement, a discrete item response is usually the observable outcome variable, and the latent variable is associated with an examinee’s trait level (e.g., skill, proficiency). The link between the two variables is called an item response function. This function, defined by a set of item parameters, models the probability of observing a given item response, conditional on a specific trait level. Typically in a measurement setting, neither the item parameters nor the trait levels are known, and so must be estimated from the pattern of observed item responses. Although a maximum likelihood approach can be taken in estimating these parameters, it usually cannot be employed directly. Instead, a method of marginal maximum likelihood (MML) is utilized, via the expectation-maximization (EM) algorithm. Alternating between an expectation (E) step and a maximization (M) step, the EM algorithm assures that the marginal log likelihood function will not decrease after each EM cycle, and will converge to a local maximum. Interestingly, the negative of this marginal log likelihood function is equal to the relative entropy, or Kullback-Leibler divergence, between the conditional distribution of the latent variables given the observable variables and the joint likelihood of the latent and observable variables. With an unconstrained optimization for the M-step proposed here, the EM algorithm as minimization of Kullback-Leibler divergence admits the convergence results due to Csiszár and Tusnády (Statistics & Decisions, 1:205–237, 1984), a consequence of the binomial likelihood common to latent trait models with dichotomous response variables. For this unconstrained optimization, the EM algorithm converges to a global maximum of the marginal log likelihood function, yielding an information bound that permits a fixed point of reference against which models may be tested. A likelihood ratio test between marginal log likelihood functions obtained through constrained and unconstrained M-steps is provided as a means for testing models against this bound. Empirical examples demonstrate the approach.
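For concreteness, the marginal log likelihood that MML maximizes in the dichotomous case, with its usual quadrature approximation, is (standard notation; this is background rather than the constrained/unconstrained M-step comparison discussed above):

```latex
\ell(\xi) \;=\; \sum_{i=1}^{N} \log \int \prod_{j=1}^{J}
  P_j(\theta;\xi)^{y_{ij}} \bigl(1 - P_j(\theta;\xi)\bigr)^{1 - y_{ij}} \,\phi(\theta)\, d\theta
\;\approx\; \sum_{i=1}^{N} \log \sum_{q=1}^{Q} w_q \prod_{j=1}^{J}
  P_j(\theta_q;\xi)^{y_{ij}} \bigl(1 - P_j(\theta_q;\xi)\bigr)^{1 - y_{ij}},
```

where P_j(θ; ξ) is the item response function for item j, φ is the latent density, and (θ_q, w_q) are quadrature nodes and weights.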
