Similar Documents
1.
In this paper, we study the problem of estimating a multivariate normal covariance matrix with staircase pattern data. Two parameterizations of the covariance matrix are used: the Cholesky decomposition and the Bartlett decomposition. Based on the Cholesky decomposition of the covariance matrix, a closed form of the maximum likelihood estimator (MLE) of the covariance matrix is given. Using a Bayesian method, we prove that the best equivariant estimator of the covariance matrix with respect to the special group related to the Cholesky decomposition exists uniquely under the Stein loss. Consequently, the MLE of the covariance matrix is inadmissible under the Stein loss. Our method can also be applied to other invariant loss functions such as the entropy loss and the symmetric loss. In addition, based on the Bartlett decomposition of the covariance matrix, the Jeffreys prior and the reference prior of the covariance matrix with staircase pattern data are also obtained. Our reference prior differs from Berger and Yang's reference prior. Interestingly, the Jeffreys prior with staircase pattern data is the same as that with complete data. The posterior properties are also investigated. Some simulation results are given for illustration.
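For reference, the Stein loss and the symmetric loss named here (and in several abstracts below) are usually taken to be the following standard forms for an estimator \hat{\Sigma} of a p x p covariance matrix \Sigma; this is a sketch of the conventional definitions, not text from the abstract, and the entropy loss is often taken to coincide with the Stein loss, with conventions varying between authors:

L_{Stein}(\hat{\Sigma},\Sigma) = \operatorname{tr}(\hat{\Sigma}\Sigma^{-1}) - \log\det(\hat{\Sigma}\Sigma^{-1}) - p,
\qquad
L_{sym}(\hat{\Sigma},\Sigma) = \operatorname{tr}(\hat{\Sigma}\Sigma^{-1}) + \operatorname{tr}(\hat{\Sigma}^{-1}\Sigma) - 2p.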

2.
In this paper, we study the problem of estimating the covariance matrix Σ and the precision matrix Ω (the inverse of the covariance matrix) in a star-shape model with missing data. By considering a type of Cholesky decomposition of the precision matrix Ω = Ψ′Ψ, where Ψ is a lower triangular matrix with positive diagonal elements, we obtain the MLEs of the covariance matrix and the precision matrix and prove that both of them are biased. Based on the MLEs, unbiased estimators of the covariance matrix and the precision matrix are obtained. A special group G, which is a subgroup of the group consisting of all lower triangular matrices, is introduced. By choosing the left invariant Haar measure on G as a prior, we obtain closed forms of the best equivariant estimators of Ω under any of the Stein loss, the entropy loss, and the symmetric loss. Consequently, the MLE of the precision matrix (covariance matrix) is inadmissible under any of the above three loss functions. Some simulation results are given for illustration.
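As a minimal numerical illustration of the factorization Ω = Ψ′Ψ used in this abstract (and in item 13 below), the sketch below works with complete data and the ordinary sample covariance; the missing-data MLEs and bias corrections derived in the paper are not reproduced, and the dimensions and seed are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))            # n = 200 complete observations on p = 4 variables
S = np.cov(X, rowvar=False, bias=True)       # MLE of Sigma under complete data
C = np.linalg.cholesky(S)                    # lower triangular factor, S = C C'
Psi = np.linalg.inv(C)                       # also lower triangular with positive diagonal
Omega_hat = Psi.T @ Psi                      # precision estimate in the form Omega = Psi' Psi
assert np.allclose(Omega_hat, np.linalg.inv(S))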

3.
When missing data are either missing completely at random (MCAR) or missing at random (MAR), the maximum likelihood (ML) estimation procedure preserves many of its properties. However, in any statistical modeling, the distribution specification for the likelihood function is at best only an approximation to the real world. In particular, since normal-distribution-based ML is typically applied to data with heterogeneous marginal skewness and kurtosis, it is necessary to know whether such a practice still generates consistent parameter estimates. When the manifest variables are linear combinations of independent random components and missing data are MAR, this paper shows that the normal-distribution-based MLE is consistent regardless of the distribution of the sample. Examples also show that the consistency of the MLE is not guaranteed for all nonnormally distributed samples. When the population follows a confirmatory factor model and data are missing due to the magnitude of the factors, the MLE may not be consistent even when data are normally distributed. When data are missing due to the magnitude of measurement errors/uniquenesses, MLEs for many of the covariance parameters related to the missing variables are still consistent. This paper also identifies and discusses the factors that affect the asymptotic biases of the MLE when data are not missing at random. The paper further shows that, under certain data models and MAR mechanisms, the MLE is asymptotically normally distributed and the asymptotic covariance matrix is consistently estimated by the commonly used sandwich-type covariance matrix. The results indicate that certain formulas and/or conclusions in the existing literature may not be entirely correct.
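For orientation, the "sandwich-type" covariance matrix referred to here is typically the standard robust form below (a conventional expression stated for context, not quoted from the abstract), where \ell_i(\theta) is the log-likelihood contribution of observation i and \hat{\theta} is the MLE:

\widehat{\mathrm{Cov}}(\hat{\theta}) = A_n^{-1} B_n A_n^{-1},
\qquad
A_n = -\sum_{i=1}^{n} \frac{\partial^2 \ell_i(\hat{\theta})}{\partial\theta\,\partial\theta'},
\qquad
B_n = \sum_{i=1}^{n} \frac{\partial \ell_i(\hat{\theta})}{\partial\theta}\,\frac{\partial \ell_i(\hat{\theta})}{\partial\theta'}.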

4.
This paper addresses the problem of estimating the normal mean matrix in the case of an unknown covariance matrix. The problem is solved by considering generalized Bayesian hierarchical models. The resulting generalized Bayes estimators with respect to an invariant quadratic loss function are shown to be matricial shrinkage equivariant estimators, and conditions for their minimaxity are given.

5.
We consider the asymptotic joint distribution of the eigenvalues and eigenvectors of a Wishart matrix when the population eigenvalues become infinitely dispersed. We show that the normalized sample eigenvalues and the relevant elements of the sample eigenvectors are asymptotically mutually independent. The limiting distributions of the normalized sample eigenvalues are chi-squared distributions with varying degrees of freedom, and the distribution of the relevant elements of the eigenvectors is the standard normal distribution. As an application of this result, we investigate tail minimaxity in the estimation of the population covariance matrix of the Wishart distribution with respect to Stein's loss function and the quadratic loss function. Under mild regularity conditions, we show that the behavior of a broad class of tail minimax estimators is identical when the sample eigenvalues become infinitely dispersed.

6.
For Wishart density functions, a long-standing question has remained unsolved: whether closed-form MLEs of the mean matrices exist over partially Löwner-ordered sets. In this note, we provide an affirmative answer by demonstrating a unified procedure that yields the closed-form MLEs in the simple ordering case. A property of the obtained MLEs under the Kullback-Leibler loss function is further studied. Some applications of the closed-form MLEs are also given, including a comparison of our ML estimates with those of Calvin and Dykstra [Maximum likelihood estimation of a set of covariance matrices under Löwner order restrictions with applications to balanced multivariate variance components models, Ann. Statist. 19 (1991) 850-869], which were obtained by an iterative algorithm.

7.
We discuss the existence of the MLE in the multiperiod probit model. When the covariance matrix is known, a necessary and sufficient condition for the existence of the MLE of the parameters is given; when the covariance matrix is unknown but has a serial structure, a necessary condition and a sufficient condition for the existence of the MLE are given.

8.
We consider a new method for sparse covariance matrix estimation which is motivated by previous results for the so-called Stein-type estimators. Stein proposed a method for regularizing the sample covariance matrix by shrinking together the eigenvalues; the amount of shrinkage is chosen to minimize an unbiased estimate of the risk (UBEOR) under the entropy loss function. The resulting estimator has been shown in simulations to yield significant risk reductions over the maximum likelihood estimator. Our method extends the UBEOR minimization problem by adding an ℓ1 penalty on the entries of the estimated covariance matrix, which encourages a sparse estimate. For a multivariate Gaussian distribution, zeros in the covariance matrix correspond to marginal independences between variables. Unlike the ℓ1-penalized Gaussian likelihood function, our penalized UBEOR objective is convex and can be minimized via a simple block coordinate descent procedure. We demonstrate via numerical simulations and an analysis of microarray data from breast cancer patients that our proposed method generally outperforms other methods for sparse covariance matrix estimation and can be computed efficiently even in high dimensions.
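A rough sketch of the two ingredients combined in the method described above, Stein-type eigenvalue shrinkage and an ℓ1 penalty encouraging sparsity; the shrinkage weight and threshold below are illustrative choices of mine, not the UBEOR-minimizing values derived in the paper, and soft-thresholding stands in for the full block coordinate descent:

import numpy as np

def shrink_and_threshold(X, alpha=0.2, tau=0.05):
    # Illustrative sparse covariance estimate: shrink the sample eigenvalues
    # toward their mean, then soft-threshold the off-diagonal entries.
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals = (1.0 - alpha) * vals + alpha * vals.mean()              # Stein-type eigenvalue shrinkage
    Sigma = vecs @ np.diag(vals) @ vecs.T
    Sparse = np.sign(Sigma) * np.maximum(np.abs(Sigma) - tau, 0.0) # effect of an l1 penalty
    np.fill_diagonal(Sparse, np.diag(Sigma))                       # leave the diagonal unpenalized
    return Sparse

X = np.random.default_rng(1).standard_normal((100, 20))
Sigma_hat = shrink_and_threshold(X)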

9.
Asymptotic Properties of a Sieve Maximum Likelihood Estimator
For a partially linear model in which the response is observed as Case I interval-censored data, this paper studies the asymptotic properties of the sieve maximum likelihood estimator. The sieve space is constructed from trigonometric series. Under certain conditions, the estimator is shown to be strongly consistent, and its rate of convergence is obtained, with the estimator of the nonparametric component attaining the optimal rate of convergence; the information bound for the parametric component is also computed.

10.
The problem of estimating, under unweighted quadratic loss, the mean of a multinormal random vector X with arbitrary covariance matrix V is considered. The results of James and Stein for the case V = I have since been extended by Bock to cover arbitrary V and also to allow for contracting X towards a subspace other than the origin; minimax estimators (other than X) exist if and only if the eigenvalues of V are not "too spread out." In this paper a slight variation of Bock's estimator is considered. A necessary and sufficient condition for the minimaxity of the present estimator is (1): the eigenvalues of (I − P)V should not be "too spread out," where P denotes the projection matrix associated with the subspace towards which X is contracted. The validity of (1) is then examined for a number of patterned covariance matrices (e.g., intraclass covariance, tridiagonal, and first-order autocovariance), and conditions are given for (1) to hold when contraction is towards the origin or towards the common mean of the components of X. Condition (1) is also examined when X is the usual estimate of the regression vector in multiple linear regression. In several of the cases considered, the eigenvalues of V are "too spread out" while those of (I − P)V are not, so that in these instances the present method can be used to produce a minimax estimate.
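For context, the James-Stein estimator (for V = \sigma^2 I) and the Bock-type extension to general V referred to above have, up to notation, the following forms; the exact constant in the minimaxity condition is stated here from memory of Bock's result and should be checked against the original paper:

\delta_{JS}(X) = \left(1 - \frac{(p-2)\sigma^{2}}{\|X\|^{2}}\right) X,
\qquad
\delta_{a}(X) = \left(1 - \frac{a}{X' V^{-1} X}\right) X,
\quad
0 \le a \le 2\left(\frac{\operatorname{tr} V}{\lambda_{\max}(V)} - 2\right),

so that a minimax improvement of this form requires \operatorname{tr} V / \lambda_{\max}(V) > 2, which is one way of making the "not too spread out" condition precise.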

11.
This article is based on the series of lectures given by Prof. Charles Stein of Stanford University at LOMI AN SSSR in the fall of 1976. The first three lectures are concerned with the estimation of the mean vector of a multivariate normal distribution under a quadratic loss function. James-Stein estimators are considered and their relation to Bayesian estimators is discussed. The problem of estimating the covariance matrix of the normal distribution and the estimation of the entropy of a multinomial distribution are considered in the following two lectures. The final lecture discusses several problems related to the estimation of multivariate parameters and poses some unsolved problems. Published in Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 74, pp. 4-65, 1977. In concluding this series of lectures, I would like to acknowledge the assistance of M. Ermakov, A. Borodin, and A. Makshanov in translating my lectures into Russian.

12.
白鹏, 郭海兵, 《数学进展》 (Advances in Mathematics), 2007, 36(5): 546-560
For the GMANOVA-MANOVA model with Gaussian errors and a uniform covariance structure, we derive the maximum likelihood estimators of the unknown parameters together with their means and variances, and construct exact confidence regions for the unknown parameters based on the MLEs.

13.
In this paper, we introduce the star-shape models, where the precision matrix Ω (the inverse of the covariance matrix) is structured by special conditional independence constraints. We estimate the precision matrix under the entropy loss and the symmetric loss. We show that the maximum likelihood estimator (MLE) of the precision matrix is biased. Based on the MLE, an unbiased estimator is obtained. We consider a type of Cholesky decomposition of Ω, in the sense that Ω = Ψ′Ψ, where Ψ is a lower triangular matrix with positive diagonal elements. A special group G, which is a subgroup of the group consisting of all lower triangular matrices, is introduced. General forms of equivariant estimators of the covariance matrix and the precision matrix are obtained. The invariant Haar measures on G, the reference prior, and the Jeffreys prior of Ψ are also discussed. We further introduce a class of priors of Ψ, which includes all the priors described above. The posterior properties are discussed, and closed forms of the Bayesian estimators are derived under either the entropy loss or the symmetric loss. We also show that the best equivariant estimators with respect to G are special cases of the Bayesian estimators. Consequently, the MLE of the precision matrix is inadmissible under either the entropy or the symmetric loss. Closed-form expressions for the risks of the equivariant estimators are obtained. Some numerical results are given for illustration. The project is supported by the National Science Foundation grants DMS-9972598, SES-0095919, and SES-0351523, and a grant from Federal Aid in Wildlife Restoration Project W-13-R through the Missouri Department of Conservation.

14.
We consider goodness-of-fit tests of the Cauchy distribution based on weighted integrals of the squared distance between the empirical characteristic function of the standardized data and the characteristic function of the standard Cauchy distribution. For standardization of the data, Gürtler and Henze (2000, Annals of the Institute of Statistical Mathematics, 52, 267-286) used the median and the interquartile range. In this paper we use the maximum likelihood estimator (MLE) and an equivariant integrated squared error estimator (EISE), which minimizes the weighted integral. We derive an explicit form of the asymptotic covariance function of the characteristic function process with parameters estimated by the MLE or the EISE. The eigenvalues of the covariance function are numerically evaluated, and the asymptotic distributions of the test statistics are obtained by the residue theorem. A simulation study shows that the proposed tests compare well with the tests proposed by Gürtler and Henze and with more traditional tests based on the empirical distribution function.
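A rough numerical sketch of a statistic of this type is given below; the exponential weight exp(-kappa*|t|), the truncation of the integral, and the quadrature grid are illustrative choices of mine and not necessarily those analyzed in the paper:

import numpy as np
from scipy import stats

def cauchy_ecf_statistic(x, kappa=1.0, t_max=50.0, n_grid=4000):
    loc, scale = stats.cauchy.fit(x)                 # MLE of the Cauchy location and scale
    y = (np.asarray(x) - loc) / scale                # standardized data
    t = np.linspace(0.0, t_max, n_grid)              # the integrand is even in t
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical characteristic function
    integrand = np.abs(ecf - np.exp(-t)) ** 2 * np.exp(-kappa * t)
    return 2.0 * len(y) * np.sum(integrand) * (t[1] - t[0])   # crude quadrature of the weighted integral

x = stats.cauchy.rvs(size=200, random_state=0)       # the statistic should be small for genuine Cauchy data
print(cauchy_ecf_statistic(x))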

15.
16.
Summary. Stein [2] has shown that the maximum likelihood estimator (MLE) of the regression coefficients is admissible in univariate regression with one predictor, or with two predictors and known means. In a similar way it is shown in the present note that the MLE is admissible when there are two predictands and one predictor and the means are known.

17.
Circulant matrix embedding is one of the most popular and efficient methods for the exact generation of Gaussian stationary univariate series. Although the idea of circulant matrix embedding has also been used for the generation of Gaussian stationary random fields, there are many practical covariance structures of random fields where classical embedding methods break down. In this work, we propose a novel methodology that adaptively constructs feasible circulant embeddings based on convex optimization with an objective function measuring the distance of the covariance embedding to the targeted covariance structure over the domain of interest. The optimal value of the objective function will be zero if and only if there exists a feasible embedding for the a priori chosen embedding size.
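For background, the classical (non-adaptive) circulant embedding that the abstract contrasts with can be sketched as follows for a univariate stationary series; this is the standard construction, not the adaptive convex-optimization method proposed in the paper, and the exponential covariance in the example is an arbitrary choice that happens to embed feasibly:

import numpy as np

def circulant_embedding_sample(r, rng=None):
    # Draw one path of a zero-mean stationary Gaussian series with autocovariances
    # r(0), ..., r(n-1), using the classical circulant embedding of size 2(n-1).
    rng = np.random.default_rng() if rng is None else rng
    r = np.asarray(r, dtype=float)
    n = len(r)
    c = np.concatenate([r, r[-2:0:-1]])              # first row of the circulant embedding matrix
    m = len(c)
    lam = np.fft.fft(c).real                         # eigenvalues of the circulant matrix
    if lam.min() < -1e-10:
        raise ValueError("classical embedding infeasible: negative eigenvalues")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    x = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)
    return x.real[:n]                                # x.imag[:n] is a second independent path

r = np.exp(-np.arange(256) / 10.0)                   # exponential covariance
path = circulant_embedding_sample(r, np.random.default_rng(0))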

18.
The decomposition of the Kullback-Leibler risk of the maximum likelihood estimator (MLE) is discussed in relation to the Stein estimator and the conditional MLE. A notable correspondence between the decomposition in terms of the Stein estimator and that in terms of the conditional MLE is observed. This decomposition reflects that of the expected log-likelihood ratio. Accordingly, it is concluded that these modified estimators reduce the risk by reducing the expected log-likelihood ratio. The empirical Bayes method is discussed from this point of view.

19.
Summary. In the problem of estimating the covariance matrix of a multivariate normal population, James and Stein (Proc. Fourth Berkeley Symp. Math. Statist. Prob., 1, 361-380, Univ. of California Press) obtained a minimax estimator under a scale-invariant loss. In this paper we propose an orthogonally invariant trimmed estimator by solving a certain differential inequality involving the eigenvalues of the sample covariance matrix. The estimator obtained truncates the extreme eigenvalues first and then shrinks the larger sample eigenvalues and expands the smaller ones. An adaptive version of the trimmed estimator is also discussed. Finally, some numerical studies are performed using the Monte Carlo simulation method, and it is observed that the trimmed estimator shows a substantial improvement over the minimax estimator. The second author's research was supported by NSF Grant Number MCS 82-12968.

20.
Suppose that we have n − a independent observations from Np(0, Σ) and that, in addition, a further a independent observations are available on the last p − c coordinates. Assuming that the two sets of observations are independent, we consider the problem of estimating Σ under Stein's loss function, and show that some estimators invariant under permutations of the last p − c coordinates as well as under permutations of the first c coordinates are better than the minimax estimators of Eaton. The estimators considered outperform the maximum likelihood estimator (MLE) under Stein's loss function as well. The method involved here is the computation of an unbiased estimate of the risk of an invariant estimator considered in this article. In addition, we discuss the application to the problem of estimating a covariance matrix in a GMANOVA model, since the estimation problem of the covariance matrix with extra data can be regarded as its canonical form.
