Similar Documents
19 similar documents found
1.
An accelerated Monte Carlo EM algorithm is applied to parameter estimation for mixed exponential distributions under constant-stress accelerated life testing with Type-II censoring. Simulation studies show that the accelerated Monte Carlo EM algorithm estimates the mixed exponential distribution more effectively, and converges faster, than the EM algorithm.
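As a rough illustration of the Monte Carlo E-step idea described above (simulate the conditional expectation instead of computing it in closed form), here is a minimal MCEM sketch on a toy Type-I-censored single-exponential sample; the model, censoring point, and all numerical values are illustrative assumptions, not the mixed-exponential Type-II-censored setting of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: exponential lifetimes, Type-I censored at c (illustrative values).
lam_true, c, n = 2.0, 0.6, 2000
x = rng.exponential(1 / lam_true, n)
censored = x > c
x_obs = np.where(censored, c, x)

lam = 1.0  # starting value
for _ in range(50):
    # Monte Carlo E-step: approximate E[X | X > c] for each censored point
    # by averaging draws from the conditional distribution, which by the
    # memoryless property is c + Exp(lam).
    m = 500  # Monte Carlo sample size per censored observation
    imputed = c + rng.exponential(1 / lam, (m, censored.sum())).mean(axis=0)
    filled = x_obs.copy()
    filled[censored] = imputed
    # M-step: complete-data MLE of the exponential rate.
    lam = 1.0 / filled.mean()
```

The fixed point of this iteration is the observed-data MLE, since the complete-data sufficient statistic (the total lifetime) is linear in the imputed values.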

2.
The Monte Carlo EM (MCEM) algorithm is applied to give a new method for parameter estimation in hierarchical linear models, resolving the intractable integrals that arise when the EM algorithm is applied to such models. Numerical simulations comparing the proposed estimates with those of the EM algorithm confirm the method's effectiveness and feasibility.

3.
For the asymmetric data that arise in finance, economics, the social sciences, environmental science, engineering, and biomedicine, this paper proposes a modal regression model for skew-normal data and estimates its unknown parameters with an EM algorithm built on Newton-Raphson iteration. Monte Carlo simulations and an empirical analysis of BMI data demonstrate the effectiveness of the proposed method: for skew-normal data, the modal regression model yields better estimates than mean regression.

4.
To fit skewed data better and extract more of the information they contain, a modal regression model is built for skew-normal data, and statistical diagnostics based on the Peña distance statistic are developed, yielding an expression for the Peña distance under the modal regression model and a diagnostic method for high-leverage outliers. Maximum likelihood estimates of the model parameters are obtained with the EM algorithm combined with gradient descent; the likelihood distance, Cook's distance, and the Peña distance are computed under the case-deletion model, and diagnostic plots are drawn. Monte Carlo simulation and a real-data comparison show that the proposed method works well, and that under certain conditions the Peña distance detects outliers and influential points better than the likelihood distance and Cook's distance.

5.
An Implementable Algorithm for Finding Global Minima and Its Convergence (cited by 7: 0 self, 7 other)
In 1978, Zheng Quan et al. first proposed an integral-level-set method for finding global minima, together with an implementable algorithm based on Monte Carlo random sampling; whether that implementation converges remained an open problem. This paper proposes an implementable algorithm based on number-theoretic methods and proves that it converges. Preliminary numerical results indicate that the proposed implementation is fairly effective.

6.
By augmenting the data with partially missing lifetime variables, a relatively simple likelihood function is obtained for the failure-rate change-point model under censoring and truncation. The probability distribution of the augmented missing-data variables and a scheme for sampling them are discussed. The Monte Carlo EM algorithm is used to iterate on the unknown parameters, and Gibbs sampling of the parameters' full conditional distributions is carried out with the help of the Metropolis-Hastings algorithm; the parameters are then estimated from the Gibbs samples, and the implementation steps of the MCMC method are described in detail. Simulation results show that the Bayes estimates of all parameters are quite accurate.

7.
Monte Carlo simulation is one of the classical methods for option pricing, but it converges slowly. For the Hull-White stochastic volatility model, we propose a QMCAV method that combines quasi-Monte Carlo (QMC) with antithetic variates (AV) and can handle the pricing of certain exotic options. Plain Monte Carlo (MC), quasi-Monte Carlo, antithetic variates, and QMCAV are each used to simulate prices of lookback and Asian options under varying parameters. Numerical experiments show that QMCAV is more stable and effective than MC, QMC, and AV.
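As a hedged sketch of one ingredient of the QMCAV method, antithetic variates, here is a plain Monte Carlo pricer for an arithmetic-average Asian call under geometric Brownian motion. The constant-volatility dynamics, the quasi-Monte Carlo component's omission, and all parameter values are illustrative assumptions, not the paper's Hull-White setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: spot, strike, rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps

def discounted_payoffs(z):
    """Arithmetic-average Asian call payoffs along GBM paths driven by z."""
    log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    avg_price = (S0 * np.exp(log_paths)).mean(axis=1)
    return np.exp(-r * T) * np.maximum(avg_price - K, 0.0)

def asian_call_mc(use_antithetic):
    z = rng.standard_normal((n_paths, n_steps))
    p = discounted_payoffs(z)
    if use_antithetic:
        # Average each path's payoff with that of its mirrored path -z;
        # the negative correlation within each pair lowers the variance.
        p = 0.5 * (p + discounted_payoffs(-z))
    return p.mean(), p.std(ddof=1) / np.sqrt(n_paths)

price_mc, se_mc = asian_call_mc(False)
price_av, se_av = asian_call_mc(True)
```

Because the payoff is monotone in each Gaussian increment, the antithetic estimator's standard error comes out noticeably smaller than the plain one at the same number of paths.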

8.
First, the fuzzy structured element method converts the integral of a fuzzy-valued function into an equivalent integral of a crisp function. Monte Carlo simulation then provides a numerical solution of the crisp integral, and hence of the original fuzzy-valued integral. A concrete numerical example is given.
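The second step described above, a Monte Carlo estimate of an ordinary (crisp) integral, can be sketched in a few lines; the integrand and interval below are arbitrary illustrations, and the structured-element transform that produces the crisp integrand is not shown.

```python
import math
import random

random.seed(3)

def mc_integral(f, a, b, n=200_000):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    average f at uniform random points and scale by the interval length."""
    total = sum(f(a + (b - a) * random.random()) for _ in range(n))
    return (b - a) * total / n

# Example: integrate sin over [0, pi]; the exact value is 2.
est = mc_integral(math.sin, 0.0, math.pi)
```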

9.
We present an approximate method (Welch-Satterthwaite), a Bayesian method, and a Monte Carlo simulation algorithm for comparing the means of two populations when the variances are unknown, and compare them on a worked example.
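The Welch-Satterthwaite approximation mentioned above combines the two sample variances into an approximate t statistic with fractional degrees of freedom; a minimal sketch using the standard formulas, with toy data in place of the article's example:

```python
import math

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for comparing two means when the variances are unknown and unequal."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((xi - mx) ** 2 for xi in x) / (nx - 1)  # sample variances
    vy = sum((yi - my) ** 2 for yi in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    # Satterthwaite's approximation to the degrees of freedom.
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Toy samples (illustrative only).
t, df = welch_t([1, 2, 3, 4], [2, 4, 6, 8, 10])
```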

10.
Long Bing, Zhang Zhongzhan 《应用数学》2019, 32(2): 302-310
To overcome the drawbacks of Type-I (time-terminated) censored testing, this paper proposes a new life-test scheme. The likelihood function is obtained from the test data, and the maximum likelihood point estimate of the scale parameter is derived. The EM algorithm yields iterative equations for the shape parameter and the acceleration factor, and the Fisher information matrix is computed using the missing information principle. Asymptotic confidence intervals for the parameters follow from the asymptotic normality of the maximum likelihood estimates. Monte Carlo simulation is used to compute the mean absolute relative bias and mean squared error of the estimates, and the effect of sample size on estimation accuracy is discussed. Finally, for a concrete sample, estimates of the shape parameter, acceleration factor, and reliability are computed at different stress levels.

11.
Two noniterative algorithms for computing posteriors (cited by 1: 0 self, 1 other)
In this paper, we first propose a noniterative sampling method to obtain an i.i.d. sample approximately from posteriors by combining the inverse Bayes formula, sampling/importance resampling, and posterior mode estimates. We then propose a new exact algorithm to compute posteriors by improving the PMDA-Exact using the sampling-wise IBF. If the posterior mode is available from the EM algorithm, then these two algorithms compute posteriors well and eliminate the convergence problems of Markov chain Monte Carlo methods. We demonstrate the good performance of our methods with several examples.

12.
The stochastic approximation EM (SAEM) algorithm is a simulation-based alternative to the expectation-maximization (EM) algorithm for situations when the E-step is hard or impossible. One of the appeals of SAEM is that, unlike other Monte Carlo versions of EM, it converges with a fixed (and typically small) simulation size. Another appeal is that, in practice, the only decision that has to be made is the choice of the step size, which is a one-time decision usually made before starting the method. The downside of SAEM is that there exist no data-driven and/or model-driven recommendations as to the magnitude of this step size. We argue in this article that a challenging model/data combination coupled with an unlucky step size can lead to very poor algorithmic performance and, in particular, to a premature stop of the method. This article proposes a new heuristic for SAEM's step size selection based on the underlying EM rate of convergence. We also use the much-appreciated EM likelihood-ascent property to derive a new and flexible way of monitoring the progress of the SAEM algorithm. The method is applied to a challenging geostatistical model of online retailing.
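The stochastic-approximation update that distinguishes SAEM from plain Monte Carlo EM can be sketched on a toy Type-I-censored exponential sample: one simulated completion per iteration, averaged into a running sufficient statistic with a decreasing step size. The model, the step-size schedule, and all numbers below are illustrative assumptions, not the article's geostatistical application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: exponential lifetimes, Type-I censored at c (illustrative values).
lam_true, c, n = 2.0, 0.6, 2000
x = rng.exponential(1 / lam_true, n)
cens = x > c
x_obs = np.where(cens, c, x)

lam = 1.0
s = x_obs.mean()  # running estimate of the complete-data sufficient statistic
for k in range(1, 3001):
    # Assumed step-size schedule: a short burn-in with gamma = 1,
    # then a polynomially decreasing sequence.
    gamma = 1.0 if k <= 50 else (k - 50) ** -0.7
    # Simulation step: a SINGLE completion of the censored lifetimes,
    # drawn from X | X > c (memoryless property of the exponential).
    filled = x_obs.copy()
    filled[cens] = c + rng.exponential(1 / lam, cens.sum())
    # Stochastic-approximation step: fold the new statistic into the average.
    s = (1 - gamma) * s + gamma * filled.mean()
    # M-step expressed through the averaged statistic.
    lam = 1.0 / s
```

The decreasing step size does the averaging that MCEM achieves with a large per-iteration simulation size, which is why a single draw per iteration suffices.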

13.
This article presents new computational techniques for multivariate longitudinal or clustered data with missing values. Current methodology for linear mixed-effects models can accommodate imbalance or missing data in a single response variable, but it cannot handle missing values in multiple responses or additional covariates. Applying a multivariate extension of a popular linear mixed-effects model, we create multiple imputations of missing values for subsequent analyses by a straightforward and effective Markov chain Monte Carlo procedure. We also derive and implement a new EM algorithm for parameter estimation which converges more rapidly than traditional EM algorithms because it does not treat the random effects as “missing data,” but integrates them out of the likelihood function analytically. These techniques are illustrated on models for adolescent alcohol use in a large school-based prevention trial.

14.
This paper studies maximum likelihood estimation of the parameters of log-linear models with missing data. The proposed model is fitted with the Monte Carlo EM algorithm: in the expectation step, a Metropolis-Hastings algorithm generates a sample of the missing data; in the maximization step, Newton-Raphson iteration maximizes the likelihood. Finally, the Fisher information of the observed data yields the asymptotic variances and standard errors of the maximum likelihood estimates.

15.
The skew-t-normal distribution is an important statistical tool for analyzing leptokurtic, heavy-tailed data. This study proposes a joint location-scale mixture linear model for skew-t-normal data and derives maximum likelihood estimates of its parameters via the EM algorithm and the Newton-Raphson method. Simulation studies verify the effectiveness of the proposed method, and an analysis of real data confirms its practicality and feasibility.

16.
The family of expectation-maximization (EM) algorithms provides a general approach to fitting flexible models for large and complex data. The expectation (E) step of EM-type algorithms is time-consuming in massive data applications because it requires multiple passes through the full data. We address this problem by proposing an asynchronous and distributed generalization of the EM called the distributed EM (DEM). Using DEM, existing EM-type algorithms are easily extended to massive data settings by exploiting the divide-and-conquer technique and widely available computing power, such as grid computing. The DEM algorithm reserves two groups of computing processes called workers and managers for performing the E step and the maximization step (M step), respectively. The samples are randomly partitioned into a large number of disjoint subsets and are stored on the worker processes. The E step of the DEM algorithm is performed in parallel on all the workers, and every worker communicates its results to the managers at the end of the local E step. The managers perform the M step after they have received results from a γ-fraction of the workers, where γ is a fixed constant in (0, 1]. The sequence of parameter estimates generated by the DEM algorithm retains the attractive properties of EM: convergence of the sequence of parameter estimates to a local mode and a linear global rate of convergence. Across diverse simulations focused on linear mixed-effects models, the DEM algorithm is significantly faster than competing EM-type algorithms while having similar accuracy. The DEM algorithm maintains its superior empirical performance on a movie ratings database consisting of 10 million ratings. Supplementary material for this article is available online.

17.
We propose a multinomial probit (MNP) model that is defined by a factor analysis model with covariates for analyzing unordered categorical data, and discuss its identification. Some useful MNP models are special cases of the proposed model. To obtain maximum likelihood estimates, we use the EM algorithm with its M-step greatly simplified under Conditional Maximization and its E-step made feasible by Monte Carlo simulation. Standard errors are calculated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with simulated data.

18.
Maximum likelihood estimation in finite mixture distributions is typically approached as an incomplete data problem to allow application of the expectation-maximization (EM) algorithm. In its general formulation, the EM algorithm involves the notion of a complete data space, in which the observed measurements and incomplete data are embedded. An advantage is that many difficult estimation problems are facilitated when viewed in this way. One drawback is that the simultaneous update used by standard EM requires overly informative complete data spaces, which leads to slow convergence in some situations. In the incomplete data context, it has been shown that the use of less informative complete data spaces, or equivalently smaller missing data spaces, can lead to faster convergence without sacrificing simplicity. However, in the mixture case, little progress has been made in speeding up EM. In this article we propose a component-wise EM for mixtures. It uses, at each iteration, the smallest admissible missing data space by intrinsically decoupling the parameter updates. Monotonicity is maintained, although the estimated proportions may not sum to one during the course of the iteration. However, we prove that the mixing proportions will satisfy this constraint upon convergence. Our proof of convergence relies on the interpretation of our procedure as a proximal point algorithm. For performance comparison, we consider standard EM as well as two other algorithms based on missing data space reduction, namely the SAGE and AECME algorithms. We provide adaptations of these general procedures to the mixture case. We also consider the ECME algorithm, which is not a data augmentation scheme but still aims at accelerating EM. Our numerical experiments illustrate the advantages of the component-wise EM algorithm relative to these other methods.
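For context, the standard simultaneous-update EM for mixtures, the baseline this article improves on, looks as follows on a toy two-component univariate Gaussian mixture. All values are illustrative; the component-wise variant would instead update one component's parameters per sweep, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy mixture data: 60% N(-2, 1) and 40% N(3, 1).
x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(3, 1, 400)])

w = np.array([0.5, 0.5])      # mixing proportions
mu = np.array([-1.0, 1.0])    # component means
sig = np.array([1.0, 1.0])    # component standard deviations
for _ in range(200):
    # E-step: posterior responsibilities of each component for each point.
    dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2)
            / (sig * np.sqrt(2 * np.pi)))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: ALL components updated simultaneously (the standard scheme).
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```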

19.
Implementations of the Monte Carlo EM Algorithm (cited by 1: 0 self, 1 other)
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm where the expectation in the E-step is computed numerically through Monte Carlo simulations. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost in obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
