Similar Literature
A total of 20 similar documents were retrieved.
1.
The EM algorithm for mixture problems can be interpreted as a method of coordinate descent on a particular objective function. This view of the iteration partially illuminates the relationship of EM to certain clustering techniques and explains global convergence properties of the algorithm without direct reference to an incomplete data framework.
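
As a minimal illustration of this coordinate-descent view (not taken from the paper), the sketch below fits a one-dimensional Gaussian mixture by alternately maximizing a single objective F(q, θ) = Σ_{i,k} q_ik [log π_k + log N(x_i; μ_k, σ_k²)] − Σ_{i,k} q_ik log q_ik, first over the responsibilities q (E-step) and then over the parameters θ (M-step); all names are illustrative.

```python
import numpy as np

def normal_logpdf(x, mu, var):
    # log N(x; mu, var), elementwise with broadcasting
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def em_coordinate_descent(x, K=2, iters=100, seed=0):
    """EM for a 1-D Gaussian mixture written as coordinate ascent on
    F(q, theta) = sum_ik q_ik*(log pi_k + log N(x_i; mu_k, var_k)) - sum_ik q_ik*log q_ik."""
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False).astype(float)
    var = np.full(K, np.var(x))
    for _ in range(iters):
        # E-step: maximize F over q with theta fixed -> posterior responsibilities
        logw = np.log(pi) + normal_logpdf(x[:, None], mu, var)
        logw -= logw.max(axis=1, keepdims=True)
        q = np.exp(logw)
        q /= q.sum(axis=1, keepdims=True)
        # M-step: maximize F over theta with q fixed -> weighted maximum-likelihood updates
        nk = q.sum(axis=0)
        pi = nk / n
        mu = (q * x[:, None]).sum(axis=0) / nk
        var = (q * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# toy usage
x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 1, 200)])
print(em_coordinate_descent(x, K=2))
```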

2.
Here we propose a new class of distributions as a generalized mixture of standard normal and skew normal distributions (GMNSND) and study some of its properties by deriving its characteristic function, mean, variance, coefficient of skewness, etc. Further, certain reliability aspects of the GMNSND are studied and a location-scale extension of the GMNSND is considered. Estimation of the parameters of this extended GMNSND by the method of maximum likelihood is discussed.
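
The exact GMNSND density is defined in the paper; purely as a rough illustration, the sketch below evaluates a plain two-component combination of a standard normal density and a skew-normal density 2φ(x)Φ(λx), which is one simple way to mix these two families (the paper's "generalized" mixture may use a more general weighting; p and λ here are illustrative).

```python
import numpy as np
from scipy.stats import norm, skewnorm

def gmnsnd_like_pdf(x, p=0.4, lam=3.0):
    """Illustrative density: p * N(0,1) + (1 - p) * skew-normal(lambda).
    This is only a stand-in for the GMNSND density defined in the paper."""
    return p * norm.pdf(x) + (1 - p) * skewnorm.pdf(x, a=lam)

x = np.linspace(-4, 4, 9)
print(gmnsnd_like_pdf(x))
```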

3.
Finite mixture models represent one of the most popular tools for modeling heterogeneous data. The traditional approach to parameter estimation is based on maximizing the likelihood function. Direct optimization is often troublesome due to the complex likelihood structure. The expectation–maximization algorithm proves to be an effective remedy that alleviates this issue. The solution obtained by this procedure is entirely driven by the choice of starting parameter values, which highlights the importance of an effective initialization strategy. Despite efforts undertaken in this area, no uniform winner has been found, and practitioners tend to ignore the issue, often obtaining misleading or erroneous results. In this paper, we propose a simple yet effective tool for initializing the expectation–maximization algorithm in the mixture modeling setting. The idea is based on model averaging and proves to be efficient in detecting correct solutions even in cases where competitors perform poorly. The utility of the proposed methodology is shown through a comprehensive simulation study and an application to a well-known classification dataset, with good results.
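
The paper's model-averaging initializer is not reproduced here; as a point of reference, the sketch below shows the common multi-start strategy such proposals are compared against: run several short EM passes from random starts, keep the best by log-likelihood, and refine it with a full run (illustrative only, using scikit-learn's GaussianMixture).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def short_run_init(X, K=3, n_starts=20, short_iter=5, seed=0):
    """Pick starting values by running several short EM passes and keeping the
    start with the highest log-likelihood (a standard baseline, not the
    model-averaging scheme of the paper)."""
    rng = np.random.RandomState(seed)
    best, best_ll = None, -np.inf
    for _ in range(n_starts):
        gm = GaussianMixture(n_components=K, max_iter=short_iter,
                             init_params="random", random_state=rng.randint(1 << 30))
        gm.fit(X)
        ll = gm.score(X)  # mean log-likelihood per observation
        if ll > best_ll:
            best, best_ll = gm, ll
    # refine the best short run with full-length EM
    final = GaussianMixture(n_components=K, max_iter=500,
                            means_init=best.means_, weights_init=best.weights_,
                            precisions_init=best.precisions_)
    return final.fit(X)

X = np.vstack([np.random.normal(m, 1, size=(200, 2)) for m in (-3, 0, 3)])
model = short_run_init(X, K=3)
print(model.means_)
```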

4.
Summary. Suppose that H is a mixture of distributions for a given family F. A necessary and sufficient condition is obtained under which H is, in fact, a finite mixture. An estimator of the number of distributions constituting the mixture is proposed, assuming that the mixture is finite, and its asymptotic properties are investigated.

5.
Bayes estimation of the mean of a variance mixture of multivariate normal distributions is considered under sum of squared errors loss. We find a broad class of priors (also in the variance mixture of normal class) which result in proper and generalized Bayes minimax estimators. This paper extends the results of Strawderman [Minimax estimation of location parameters for certain spherically symmetric distribution, J. Multivariate Anal. 4 (1974) 255-264] in a manner similar to that of Maruyama [Admissible minimax estimators of a mean vector of scale mixtures of multivariate normal distribution, J. Multivariate Anal. 21 (2003) 69-78], but somewhat more in the spirit of Fourdrinier et al. [On the construction of Bayes minimax estimators, Ann. Statist. 26 (1998) 660-671] for the normal case, in the sense that we construct classes of priors giving rise to minimaxity. A feature of this paper is that in certain cases we are able to construct proper Bayes minimax estimators satisfying the properties and bounds in Strawderman [Minimax estimation of location parameters for certain spherically symmetric distribution, J. Multivariate Anal. 4 (1974) 255-264]. We also give some insight into why Strawderman's results do or do not seem to apply in certain cases. In cases where they do not apply, we give minimax estimators based on Berger's [Minimax estimation of location vectors for a wide class of densities, Ann. Statist. 3 (1975) 1318-1328] results. A main condition for minimaxity is that the mixing distributions of the sampling distribution and the prior distribution satisfy a monotone likelihood ratio property with respect to a scale parameter.

6.
An estimator of the number of components of a finite mixture of k-dimensional distributions is given on the basis of a one-dimensional independent random sample obtained by a transformation of a k-dimensional independent random sample. Consistency of the estimator is shown. Some simulation results are given for finite mixtures of two-dimensional normal distributions.

7.
Fisher's method of maximum likelihood breaks down when applied to the problem of estimating the five parameters of a mixture of two normal densities from a continuous random sample of size n. Alternative methods based on minimum-distance estimation after grouping the underlying variable are proposed. Simulation results compare the efficiency of these estimators, as well as their robustness under symmetric departures from component normality. Our results indicate that the estimator based on Rao's divergence is better than the other classical ones.
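
As a rough illustration of the grouped minimum-distance idea (not the exact estimators or Rao's divergence studied in the paper), the sketch below bins the sample, computes the bin probabilities implied by a two-component normal mixture, and minimizes a Pearson chi-square distance between observed and expected bin frequencies; the distance and starting values are illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def grouped_min_distance(x, n_bins=20):
    """Minimum-distance fit of a two-component normal mixture to grouped data.
    Distance: Pearson chi-square between observed and model bin probabilities,
    a simple stand-in for the divergences compared in the paper."""
    counts, edges = np.histogram(x, bins=n_bins)
    obs = counts / counts.sum()
    edges = edges.astype(float)
    edges[0], edges[-1] = -np.inf, np.inf        # absorb the tails into the end bins

    def mix_cdf(t, theta):
        p, m1, s1, m2, s2 = theta
        return p * norm.cdf(t, m1, s1) + (1 - p) * norm.cdf(t, m2, s2)

    def distance(theta):
        probs = np.diff(mix_cdf(edges, theta))
        probs = np.clip(probs, 1e-12, None)
        return np.sum((obs - probs) ** 2 / probs)

    theta0 = [0.5, np.percentile(x, 25), np.std(x), np.percentile(x, 75), np.std(x)]
    bounds = [(0.01, 0.99), (None, None), (1e-3, None), (None, None), (1e-3, None)]
    res = minimize(distance, theta0, method="L-BFGS-B", bounds=bounds)
    return res.x   # (p, mu1, sigma1, mu2, sigma2)

x = np.concatenate([np.random.normal(0, 1, 400), np.random.normal(4, 1.5, 600)])
print(grouped_min_distance(x))
```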

8.
9.
Nonlinear principal components are defined for normal random vectors. Their properties are investigated and interpreted in terms of classical linear principal component analysis. A characterization theorem is proven. All these results are employed to give a unified interpretation of several different issues concerning Chernoff–Poincaré-type inequalities and their applications to the characterization of normal distributions.

10.
The main purpose of this paper is to establish a result giving the number of intermediate rings between R and S when (R, S) is a normal pair of rings, and to provide an algorithm to compute this number.

11.
Using stochastic simulations, we discuss how well a normal mixture distribution fits observations drawn from general mixture distributions, with parameters estimated by maximum likelihood via the EM algorithm. We calculate the probability of misclassifying objects and estimate the optimal number of mixture components with a mutual information measure.
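
A small simulation in the same spirit (illustrative only): draw data from a non-normal two-component mixture, fit normal mixtures by EM, and estimate the misclassification probability against the known simulation labels. BIC is used here as a simple stand-in for the mutual-information measure when choosing the number of components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# true generator: a mixture whose components are not normal (Laplace and Student-t)
labels = rng.integers(0, 2, size=1000)
x = np.where(labels == 0,
             rng.laplace(-2.0, 1.0, size=1000),
             rng.standard_t(5, size=1000) + 3.0).reshape(-1, 1)

# fit normal mixtures of increasing order; BIC as a stand-in selection criterion
fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(x))

# misclassification probability of the 2-component fit versus the simulation labels
pred = fits[2].predict(x)
err = min(np.mean(pred != labels), np.mean(pred != 1 - labels))  # handle label switching
print("selected k:", best_k, " misclassification rate:", err)
```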

12.
The paper is devoted to the problem of statistical estimation of a multivariate distribution density that is a discrete mixture of Gaussian distributions. A heuristic approach is considered, based on the use of the EM algorithm and nonparametric density estimation with a sequential increase in the number of mixture components. Criteria for testing model adequacy are discussed.
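
A minimal sketch of the sequential idea (not the paper's exact procedure or its adequacy criteria): increase the number of Gaussian components one at a time and stop when an adequacy proxy, here BIC as a stand-in, no longer improves.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sequential_gmm(X, max_k=10):
    """Fit Gaussian mixtures with k = 1, 2, ... components and stop as soon as
    the adequacy proxy (BIC here) stops improving. Illustrative only."""
    best_fit, best_bic = None, np.inf
    for k in range(1, max_k + 1):
        fit = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
        bic = fit.bic(X)
        if bic >= best_bic:          # adequacy stopped improving -> keep the previous model
            break
        best_fit, best_bic = fit, bic
    return best_fit

X = np.vstack([np.random.multivariate_normal(m, np.eye(2), 300)
               for m in ([0, 0], [4, 4], [0, 5])])
model = sequential_gmm(X)
print("chosen number of components:", model.n_components)
```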

13.
14.
The expectation–maximization (EM) algorithm is a very general and popular iterative computational algorithm for finding maximum likelihood estimates from incomplete data, and it is broadly used in statistical analysis with missing data because of its stability, flexibility and simplicity. However, the EM algorithm is often criticized for its slow convergence, and various algorithms have been proposed to accelerate it. The vector ε algorithm of Wynn (Math Comp 16:301–322, 1962) is used to accelerate the convergence of the EM algorithm in Kuroda and Sakakihara (Comput Stat Data Anal 51:1549–1561, 2006). In this paper, we provide a theoretical evaluation of the convergence of the ε-accelerated EM algorithm. The ε-accelerated EM algorithm does not use the information matrix but only the sequence of estimates obtained from iterations of the EM algorithm, and thus it retains the flexibility and simplicity of the EM algorithm.
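
A sketch of the vector ε (ε₂) acceleration step with the Samelson inverse, applied to three successive EM iterates in the spirit of the ε-accelerated EM: it uses only the sequence of EM estimates, not the information matrix. The wrapper and the toy fixed-point map standing in for an EM update are illustrative.

```python
import numpy as np

def samelson_inverse(v):
    # vector inverse used by the vector epsilon algorithm: v^{-1} = v / ||v||^2
    return v / np.dot(v, v)

def epsilon_accelerate(theta_prev, theta_curr, theta_next):
    """One epsilon_2 step of Wynn's vector epsilon algorithm applied to three
    successive iterates (the transform behind the epsilon-accelerated EM)."""
    u = samelson_inverse(theta_prev - theta_curr)
    w = samelson_inverse(theta_next - theta_curr)
    return theta_curr + samelson_inverse(u + w)

def accelerated_sequence(em_step, theta0, n_iter=20):
    """Run the map `em_step` and report epsilon-accelerated estimates alongside the raw iterates."""
    thetas = [np.asarray(theta0, dtype=float)]
    for _ in range(n_iter):
        thetas.append(em_step(thetas[-1]))
    accel = [epsilon_accelerate(thetas[t - 1], thetas[t], thetas[t + 1])
             for t in range(1, len(thetas) - 1)]
    return np.array(thetas), np.array(accel)

# toy linearly convergent map standing in for an EM update (fixed point at [1, 2])
em_step = lambda th: np.array([1.0, 2.0]) + 0.9 * (th - np.array([1.0, 2.0]))
raw, acc = accelerated_sequence(em_step, [0.0, 0.0], n_iter=20)
print(raw[-1], acc[-1])   # the accelerated iterate recovers the fixed point almost exactly
```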

15.
This paper deals with the unsupervised classification of univariate observations. Given a set of observations originating from a K-component mixture, we focus on the estimation of the component expectations. We propose an algorithm based on the minimization of the “K-product” (KP) criterion we introduced in a previous work. We show that the global minimum of this criterion can be reached by first solving a linear system and then calculating the roots of a polynomial of order K. The KP global minimum provides a first raw estimate of the component expectations; a nearest-neighbour classification then refines this estimate. The method’s relevance is finally illustrated through simulations of various mixtures. When the mixture components do not strongly overlap, the KP algorithm provides better estimates than the Expectation-Maximization algorithm.
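
A sketch under an explicit assumption: the KP criterion is taken here to be J(a₁,…,a_K) = Σᵢ Πₖ (xᵢ − aₖ)², i.e. Σᵢ P(xᵢ)² with P the monic polynomial whose roots are the aₖ, so that minimizing over the coefficients of P is an ordinary linear least-squares problem and the component estimates are the roots of P, followed by a nearest-neighbour assignment. Consult the paper for the exact criterion; this is only one plausible reading of "solve a linear system, then compute the roots of a degree-K polynomial".

```python
import numpy as np

def kp_estimates(x, K):
    """Assumed KP criterion: sum_i prod_k (x_i - a_k)^2 = sum_i P(x_i)^2 with
    P the monic degree-K polynomial whose roots are the a_k. Minimizing over the
    coefficients of P is linear least squares; the a_k are then the roots of P."""
    x = np.asarray(x, dtype=float)
    A = np.vander(x, K)               # columns x^{K-1}, ..., x, 1
    b = -x ** K                       # so that x^K + A @ c is as small as possible
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    roots = np.roots(np.concatenate(([1.0], coef)))   # monic polynomial
    centers = np.sort(roots.real)                     # raw component-expectation estimates
    # nearest-neighbour classification, then refine each center by its class mean
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    refined = np.array([x[labels == k].mean() for k in range(K)])
    return centers, refined

x = np.concatenate([np.random.normal(-3, 1, 300),
                    np.random.normal(0, 1, 300),
                    np.random.normal(4, 1, 300)])
print(kp_estimates(x, K=3))
```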

16.
Translated from: Problemy Ustoichivosti Stokhasticheskikh Modelei, Trudy Seminara, 1989, pp. 154–163.

17.
The Expectation-Maximization (EM) algorithm is widely used, also in industry, for parameter estimation within a Maximum Likelihood (ML) framework in the case of missing data. It is well known that EM shows good convergence in several cases of practical interest. To the best of our knowledge, results showing under which conditions EM converges fast are only available for specific cases. In this paper, we analyze the connection of the EM algorithm to other ascent methods as well as the convergence rates of the EM algorithm in general, including nonlinear models, and apply the results to the PMHT model. We compare EM with other known iterative schemes such as gradient and Newton-type methods. It is shown that EM reaches Newton-type convergence for well-separated objects, and a Newton-EM combination turns out to be robust and efficient even for closely spaced targets.
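
A toy comparison in the same spirit (not the PMHT model): estimate the two means of an equally weighted, unit-variance normal mixture by plain EM and by a Newton-type optimizer on the log-likelihood (scipy's Newton-CG here); for well-separated components both converge quickly, while closer components slow EM down. All names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
# two equally weighted unit-variance components; only the means are unknown
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

def responsibilities(m):
    # posterior probabilities of component membership (equal weights cancel out)
    w = np.column_stack([norm.pdf(x, m[0], 1), norm.pdf(x, m[1], 1)])
    return w / w.sum(axis=1, keepdims=True)

def neg_loglik(m):
    return -np.log(0.5 * norm.pdf(x, m[0], 1) + 0.5 * norm.pdf(x, m[1], 1)).sum()

def neg_grad(m):
    # d(log-lik)/d m_k = sum_i r_ik (x_i - m_k)
    r = responsibilities(m)
    return -np.array([r[:, 0] @ (x - m[0]), r[:, 1] @ (x - m[1])])

def em(m, iters):
    # with known equal weights and unit variances the M-step is a weighted mean
    for _ in range(iters):
        r = responsibilities(m)
        m = np.array([r[:, 0] @ x / r[:, 0].sum(), r[:, 1] @ x / r[:, 1].sum()])
    return m

m0 = np.array([-1.0, 1.0])
print("EM (10 iterations):", em(m0, 10))
res = minimize(neg_loglik, m0, jac=neg_grad, method="Newton-CG")
print("Newton-CG:", res.x, "in", res.nit, "iterations")
```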

18.
We consider a random graph constructed by the configuration model with the degrees of the vertices distributed identically and independently according to the law P(ξ ≥ k), k = 1, 2, …, with τ ∈ (1, 2). Connections between vertices are then formed equiprobably in accordance with their degrees. This model admits multiple edges and loops. We study the number of loops at a vertex with given degree d and its limiting behavior for different values of d as the number N of vertices grows. Depending on d = d(N), four different limit distributions appear: a Poisson distribution, a normal distribution, a convolution of normal and stable distributions, and a stable distribution. We also find the asymptotics of the mean number of loops in the graph.
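
A small simulation sketch (the exact degree law is truncated in the abstract, so a Pareto-type tail P(ξ ≥ k) = k^(−τ) is assumed here as an illustration): build a configuration-model multigraph with networkx, which keeps loops and multiple edges, and count the loops at each vertex.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, tau = 5000, 1.5

# i.i.d. degrees with assumed tail P(xi >= k) = k^(-tau), k = 1, 2, ...
# (inverse-transform sampling; the abstract's exact law is not shown, so this is a guess)
u = rng.random(N)
degrees = np.floor(u ** (-1.0 / tau)).astype(int)
if degrees.sum() % 2 == 1:          # the configuration model needs an even degree sum
    degrees[0] += 1

# configuration model: half-edges paired uniformly at random; loops and multi-edges kept
G = nx.configuration_model(degrees.tolist(), seed=0)

loops_per_vertex = {v: 0 for v in G.nodes}
for v, w in nx.selfloop_edges(G):
    loops_per_vertex[v] += 1

print("total loops:", nx.number_of_selfloops(G))
d = int(degrees.max())
print(f"loops at the max-degree vertex (d = {d}):",
      loops_per_vertex[int(np.argmax(degrees))])
```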

19.
Advances in Data Analysis and Classification - In statistical analysis, particularly in econometrics, the finite mixture of regression models based on the normality assumption is routinely used to...

20.
A mixture of inverse Gaussian distributions (IGDs) is examined as a model for the lifetime of components. The components differ in one of three ways: in their initial quality, rate of wear, or variability of wear. These three cases are well represented by the parameters of the IGD model. The mechanistic interpretation of the IGD as the first passage time of Brownian motion with positive drift is adopted. The parameters considered are either dichotomous or continuous random variables. Parameter estimation is also examined for these two cases. The model seems to be most appropriate when the single-IGD model fails due to heterogeneity of the initial component quality.
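
A brief illustration (not the paper's estimation procedure): the first-passage-time parameterisation of the inverse Gaussian, IG(μ, λ), with density sqrt(λ/(2πx³)) exp(−λ(x − μ)²/(2μ²x)), and a two-component mixture whose components differ in mean lifetime, one simple stand-in for heterogeneity in initial quality or wear rate; all parameter values are illustrative.

```python
import numpy as np

def ig_pdf(x, mu, lam):
    """Inverse Gaussian density (first passage time of Brownian motion with drift):
    mean mu, shape lambda."""
    return np.sqrt(lam / (2 * np.pi * x ** 3)) * np.exp(-lam * (x - mu) ** 2 / (2 * mu ** 2 * x))

def ig_mixture_pdf(x, p, mu1, lam1, mu2, lam2):
    # two-component IG mixture: components may differ in mean (quality / wear rate)
    # or in shape (variability of wear)
    return p * ig_pdf(x, mu1, lam1) + (1 - p) * ig_pdf(x, mu2, lam2)

x = np.linspace(0.05, 10, 5)
print(ig_mixture_pdf(x, p=0.3, mu1=1.0, lam1=4.0, mu2=4.0, lam2=4.0))
```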
