Estimation, prediction and the Stein phenomenon under divergence loss
Authors: Malay Ghosh, Victor Mergel, Gauri Sankar Datta
Affiliation: Department of Statistics, University of Florida, Gainesville, FL 32611-8545, United States; Department of Statistics, University of Georgia, Athens, GA 30602-1952, United States
Abstract: We consider two problems: (1) estimate a normal mean under a general divergence loss introduced in [S. Amari, Differential geometry of curved exponential families - curvatures and information loss, Ann. Statist. 10 (1982) 357-387] and [N. Cressie, T.R.C. Read, Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B 46 (1984) 440-464], and (2) find a predictive density of a new observation, drawn independently of the sampled observations from a normal distribution with the same mean but possibly a different variance, under the same loss. The general divergence loss includes both the Kullback-Leibler and Bhattacharyya-Hellinger losses as special cases. The sample mean, which is a Bayes estimator of the population mean under this loss and the improper uniform prior, is shown to be minimax in any arbitrary dimension. A counterpart of this result for the predictive density is also proved in any arbitrary dimension. The admissibility of these rules holds in one dimension, and we conjecture that the result is true in two dimensions as well. However, the general Baranchik [A.J. Baranchik, A family of minimax estimators of the mean of a multivariate normal distribution, Ann. Math. Statist. 41 (1970) 642-645] class of estimators, which includes the James-Stein estimator and the Strawderman [W.E. Strawderman, Proper Bayes minimax estimators of the multivariate normal mean, Ann. Math. Statist. 42 (1971) 385-388] class of estimators, dominates the sample mean in three or higher dimensions for the estimation problem. An analogous class of predictive densities is defined, and any member of this class is shown to dominate the predictive density corresponding to a uniform prior in three or higher dimensions. For the prediction problem, in the special case of Kullback-Leibler loss, our results complement, to a certain extent, some of the recent important work of Komaki [F. Komaki, A shrinkage predictive distribution for multivariate normal observations, Biometrika 88 (2001) 859-864] and George, Liang and Xu [E.I. George, F. Liang, X. Xu, Improved minimax predictive densities under Kullback-Leibler loss, Ann. Statist. 34 (2006) 78-92], while our proposed approach produces a general class of predictive densities (not necessarily Bayes, but not excluding Bayes predictors) dominating the predictive density under a uniform prior. We also show that various modifications of the James-Stein estimator continue to dominate the sample mean and, by the duality between the estimation and predictive density results that we establish, analogous results continue to hold for the prediction problem as well.
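As background, the divergence loss family referenced in the abstract is usually written in the Amari/Cressie-Read form; the following is a sketch of that form (the exact exponent convention varies across papers and is an assumption here):

\[
L_\beta(\theta, a) \;=\; \frac{1}{\beta(1-\beta)}\left[\,1 - \int \{f(x \mid a)\}^{\beta}\,\{f(x \mid \theta)\}^{1-\beta}\,dx\right], \qquad 0 < \beta < 1,
\]

where $f(\cdot \mid \theta)$ denotes the sampling density. Letting $\beta \to 0$ recovers the Kullback-Leibler loss, and $\beta = \tfrac{1}{2}$ gives a loss proportional to the squared Bhattacharyya-Hellinger distance.

The dominance claim for the James-Stein estimator in three or higher dimensions can be checked numerically. The snippet below is a minimal Monte Carlo sketch, not code from the paper: it assumes a single observation X ~ N_p(theta, I) with known unit variance, under which the Kullback-Leibler loss between N(theta, I) and N(delta, I) reduces to ||delta - theta||^2 / 2.

```python
import numpy as np

# Monte Carlo sketch (illustrative, not from the paper): compare the
# Kullback-Leibler risk of the sample mean X with that of the James-Stein
# estimator (1 - (p-2)/||X||^2) X for X ~ N_p(theta, I).  With unit variance,
# KL(N(theta, I) || N(delta, I)) = ||delta - theta||^2 / 2, so KL dominance
# here mirrors squared-error dominance.

rng = np.random.default_rng(0)
p, n_sim = 5, 100_000
theta = np.full(p, 1.0)                      # arbitrary true mean (assumption)

X = rng.normal(loc=theta, size=(n_sim, p))   # one observation per replication
norm2 = np.sum(X**2, axis=1)
js = (1.0 - (p - 2) / norm2)[:, None] * X    # James-Stein shrinkage toward 0

kl_loss = lambda d: 0.5 * np.sum((d - theta) ** 2, axis=1)
print("KL risk of X          :", kl_loss(X).mean())   # close to p/2 = 2.5
print("KL risk of James-Stein:", kl_loss(js).mean())  # strictly smaller for p >= 3
```

Running the same script with p = 1 or p = 2 shows no improvement over the sample mean, consistent with the one- and two-dimensional statements in the abstract.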
Keywords: 62C15; 62C20; 62C12