20 similar documents found (search time: 328 ms)
1.
2.
For the general linear model with random effects, this paper addresses the Minimax estimation problem for linear combinations of stochastic regression coefficients and parameters. Under quadratic loss, the minimax property of linear estimators is investigated. Under suitable assumptions, the unique linear Minimax estimator of estimable functions is obtained.
3.
Li Wen XU, Song Gui WANG, Acta Mathematica Sinica (English Series) 2007, 23(3): 497-506
In this paper, the authors address the problem of the minimax estimator of linear combinations of stochastic regression coefficients and parameters in the general normal linear model with random effects. Under a quadratic loss function, the minimax property of linear estimators is investigated. In the class of all estimators, the minimax estimator of estimable functions, which is unique with probability 1, is obtained under a multivariate normal distribution.
4.
Summary. In the problem of estimating the covariance matrix of a multivariate normal population, James and Stein (Proc. Fourth Berkeley Symp. Math. Statist. Prob., 1, 361–380, Univ. of California Press) obtained a minimax estimator under a scale-invariant loss. In this paper we propose an orthogonally invariant trimmed estimator by solving a certain differential inequality involving the eigenvalues of the sample covariance matrix. The resulting estimator truncates the extreme eigenvalues first, then shrinks the larger sample eigenvalues and expands the smaller ones. An adaptive version of the trimmed estimator is also discussed. Finally, numerical studies are performed using Monte Carlo simulation, and the trimmed estimator is observed to show a substantial improvement over the minimax estimator.
The second author's research was supported by NSF Grant Number MCS 82-12968.
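The eigenvalue shrink-and-expand idea in the abstract above can be sketched as follows. This is a minimal illustration assuming a simple linear pull of each sample eigenvalue toward their common mean; the paper's actual trimmed estimator is derived from a differential inequality and also truncates the extreme eigenvalues first.

```python
import numpy as np

def shrink_eigenvalues(S, alpha=0.5):
    # Illustrative only: pull each eigenvalue of the sample covariance
    # matrix S toward the average eigenvalue by a factor alpha, so the
    # larger eigenvalues shrink and the smaller ones expand.
    vals, vecs = np.linalg.eigh(S)
    target = vals.mean()
    new_vals = (1.0 - alpha) * vals + alpha * target
    # Reassemble the matrix with the same eigenvectors
    return vecs @ np.diag(new_vals) @ vecs.T
```

Note that this convex combination preserves the trace of S while reducing the spread of the eigenvalues, which is the qualitative behaviour the abstract describes.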
5.
Under quadratic loss and for an arbitrary matrix V, the conditional Minimax estimator of estimable functions within the class of homogeneous linear estimators is discussed for the G-M (Gauss–Markov) model, together with its properties.
6.
Toyoaki Akai 《Annals of the Institute of Statistical Mathematics》1989,41(3):485-502
Simultaneous estimation of normal means is considered for observations classified into several groups. In the one-way classification case, it is shown that an adaptive shrinkage estimator dominates a Stein-type estimator which, like Stein's (1966, Festschrift for J. Neyman, (ed. F. N. David), 351–366, Wiley, New York), shrinks observations towards individual class averages, and that it is minimax even if class sizes are small. Simulation results under quadratic loss show that it is slightly better than Stein's (1966) estimator when the between-class variances are larger than the within-class variances. Further, this estimator is shown to improve on Stein's (1966) with respect to the Bayes risk. Our estimator is derived by assuming the means to have a one-way classification structure consisting of three random terms: grand mean, class mean and residual. This technique can be applied to the case where observations are classified into a two-stage hierarchy.
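A minimal sketch of shrinkage toward class averages, assuming a fixed shrinkage factor c; the adaptive estimator in the abstract instead chooses the amount of shrinkage from the data, and `shrink_toward_class_means` is a hypothetical helper name:

```python
import numpy as np

def shrink_toward_class_means(x, groups, c=0.5):
    # Pull each observation toward the average of its own class by a
    # fixed factor c (c = 0 leaves the data unchanged, c = 1 replaces
    # every observation by its class mean).
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(x)
    for g in np.unique(groups):
        idx = groups == g
        m = x[idx].mean()
        out[idx] = m + (1.0 - c) * (x[idx] - m)
    return out
```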
7.
Yuzo Maruyama 《Journal of multivariate analysis》2004,88(2):320-334
We consider estimation of a multivariate normal mean vector under sum-of-squared-error loss. We propose a new class of minimax admissible estimators which are generalized Bayes with respect to a prior distribution that is a mixture of a point prior at the origin and a continuous hierarchical-type prior. We also study conditions under which these generalized Bayes minimax estimators improve on the James–Stein estimator and on the positive-part James–Stein estimator.
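For reference, the James–Stein estimator and its positive-part modification discussed in this abstract can be sketched as follows, assuming a known variance sigma2:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    # James-Stein estimator of a p-dimensional normal mean (p >= 3):
    # shrink x toward the origin by the factor 1 - (p-2)*sigma2/||x||^2.
    x = np.asarray(x, dtype=float)
    p = x.size
    factor = 1.0 - (p - 2) * sigma2 / float(x @ x)
    return factor * x

def james_stein_positive_part(x, sigma2=1.0):
    # Positive-part version: clip the shrinkage factor at zero so the
    # estimate never points away from x.
    x = np.asarray(x, dtype=float)
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(x @ x))
    return factor * x
```

When the observation is close to the origin the plain James–Stein factor goes negative and the positive-part version returns the zero vector, which is exactly why the positive-part estimator dominates the original.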
8.
Satoshi Kuriki 《Annals of the Institute of Statistical Mathematics》1993,45(4):731-739
The unbiased estimator of risk of the orthogonally invariant estimator of the skew-symmetric normal mean matrix is obtained, and a class of minimax estimators and their order-preserving modification are proposed. The estimators have applications in the paired-comparisons model. A Monte Carlo study comparing the risks of the estimators is given.
9.
Yuzo Maruyama William E. Strawderman 《Annals of the Institute of Statistical Mathematics》2005,57(1):157-165
This paper develops necessary conditions for an estimator to dominate the James–Stein estimator and hence the James–Stein positive-part estimator. The ultimate goal is to find classes of such dominating estimators which are admissible. While there are a number of results giving classes of estimators dominating the James–Stein estimator, the only admissible estimator known to dominate it is the generalized Bayes estimator relative to the fundamental harmonic function in three and higher dimensions. The prior was suggested by Stein, and the domination result is due to Kubokawa. Shao and Strawderman gave a class of estimators dominating the James–Stein positive-part estimator but were unable to demonstrate admissibility of any member of their class. Maruyama, following a suggestion of Stein, has studied generalized Bayes estimators arising from a mixture of a point mass at zero and a prior similar to the harmonic prior. He finds a subclass which is minimax and admissible, but is unable to show that any estimator in his class with positive point mass at zero dominates the James–Stein estimator. The results in this paper show that a subclass of Maruyama's procedures, including the class that Stein conjectured might contain members dominating the James–Stein estimator, cannot dominate the James–Stein estimator. We also show that, under reasonable conditions, the "constant" in the shrinkage factor must approach p − 2 for domination to hold.
10.
Linear Minimax Estimation of Stochastic Regression Coefficients and Parameters under Matrix Loss
For the general linear model with random effects Y = Xβ + ε, where β and ε are p-dimensional and n-dimensional random vectors with E(β) = Aα, E(ε) = 0, Cov(β) = σ²V₁, Cov(ε) = σ²V₂, β and ε uncorrelated, and Vᵢ ≥ 0 for i = 1, 2, we define the linear Minimax estimator of Sα + Qβ. Under certain conditions, the Minimax estimator of Sα + Qβ within the class of linear estimators is obtained, and its uniqueness is proved with probability 1.
11.
Here we study the problems of local asymptotic normality of the parametric family of distributions and asymptotic minimax efficient estimators when the observations are subject to right censoring. Local asymptotic normality will be established under some mild regularity conditions. A lower bound for local asymptotic minimax risk is given with respect to a bowl-shaped loss function, and furthermore a necessary and sufficient condition is given in order to achieve this lower bound. Finally, we show that this lower bound can be attained by the maximum likelihood estimator in the censored case and hence it is local asymptotic minimax efficient.
12.
For the invariant decision problem of estimating a continuous distribution function under the Kolmogorov–Smirnov loss within the class of proper distribution functions, it is proved that the sample distribution function is the best invariant estimator only for sample sizes n = 1 and 2. Further, it is shown that the best invariant estimator is minimax. Exact jumps of the best invariant estimator are derived for n ≤ 4.
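The sample distribution function and the Kolmogorov–Smirnov loss referred to above can be illustrated as follows; the supremum over t is approximated by a maximum over a finite grid, and `empirical_cdf` and `ks_loss` are hypothetical helper names:

```python
import numpy as np

def empirical_cdf(sample):
    # The sample (empirical) distribution function: fraction of
    # observations less than or equal to each evaluation point t.
    s = np.sort(np.asarray(sample, dtype=float))
    def F(t):
        return np.searchsorted(s, np.atleast_1d(t), side="right") / s.size
    return F

def ks_loss(est_cdf, true_cdf, grid):
    # Kolmogorov-Smirnov loss: sup_t |estimate(t) - truth(t)|,
    # approximated by the max over the supplied grid points.
    grid = np.asarray(grid, dtype=float)
    return float(np.max(np.abs(est_cdf(grid) - true_cdf(grid))))
```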
13.
14.
In the general normal linear model with random effects, this paper studies the estimation of stochastic regression coefficients and parameters. Under quadratic loss, the unique Minimax estimator of linear estimable functions within the class of all estimators is obtained.
15.
Summary. The problem is to estimate the mean of the normal distribution when there is vague information that the mean might be equal to zero. A minimax property of the preliminary test estimator obtained by use of the AIC (Akaike Information Criterion) procedure is proved under a loss function based on the Kullback–Leibler information measure.
16.
Qiqing Yu 《Annals of the Institute of Statistical Mathematics》1992,44(4):729-735
Consider the problem of continuous invariant estimation of a distribution function with a wide class of loss functions. It has long been conjectured that the best invariant estimator is minimax for all sample sizes n ≥ 1. This conjecture is proved in this short note. Partially supported by National Science Foundation Grant DMS 9001194.
17.
In this paper we consider the problem of estimating the matrix of regression coefficients in a multivariate linear regression model in which the design matrix is near singular. Under the assumption of normality, we propose empirical Bayes ridge regression estimators with three types of shrinkage functions, that is, scalar, componentwise and matricial shrinkage. These proposed estimators are proved to be uniformly better than the least squares estimator, that is, minimax in terms of risk under Strawderman's loss function. Through simulation and empirical studies, they are also shown to be useful in multicollinearity cases.
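A plain ridge-regression sketch with a fixed shrinkage constant lam; the empirical Bayes estimators in the abstract instead estimate the amount of shrinkage from the data, with scalar, componentwise or matricial shrinkage:

```python
import numpy as np

def ridge_coefficients(X, Y, lam=1.0):
    # Ordinary ridge regression: (X'X + lam*I)^{-1} X'Y.
    # The penalty lam*I stabilizes the inversion when X'X is
    # near singular (the multicollinearity case above).
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
```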
18.
Frank Marohn 《Annals of the Institute of Statistical Mathematics》1997,49(4):645-666
This paper deals with the estimation of the extreme value index in local extreme value models. We establish local asymptotic normality (LAN) under certain extreme value alternatives. It turns out that the central sequence occurring in the LAN expansion of the likelihood process is, up to a rescaling procedure, the Hill estimator. The central sequence plays a crucial role in the construction of asymptotically optimal statistical procedures. In particular, the Hill estimator is asymptotically minimax.
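The Hill estimator mentioned above, based on the k largest order statistics, can be sketched as:

```python
import numpy as np

def hill_estimator(sample, k):
    # Hill estimator of the extreme value (tail) index: the average
    # log-excess of the k largest observations over the (k+1)-th
    # largest order statistic.
    x = np.sort(np.asarray(sample, dtype=float))[::-1]  # descending
    return float(np.mean(np.log(x[:k]) - np.log(x[k])))
```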
19.
Assume X = (X1, …, Xp)′ has a normal mixture distribution with density w.r.t. Lebesgue measure, where Σ is a known positive definite matrix and F is any known c.d.f. on (0, ∞). Estimation of the mean vector θ under an arbitrary known quadratic loss function Q(θ, a) = (a − θ)′ Q(a − θ), with Q a positive definite matrix, is considered. An unbiased estimator of risk is obtained for an arbitrary estimator, and a sufficient condition for estimators to be minimax is then derived. The result is applied to modifying all the Stein estimators for the means of independent normal random variables into minimax estimators for the problem considered here. In particular, the results apply to the Stein class of limited translation estimators.
20.
Qiqing Yu 《Annals of the Institute of Statistical Mathematics》1989,41(3):503-520
Consider both the classical and some more general invariant decision problems of estimating a continuous distribution function, with a given invariant loss function and a sample of size n from F. It is proved that any nonrandomized estimator can be approximated in Lebesgue measure by the more general invariant estimators. Some methods for investigating the finite-sample problem are discussed. As an application, a proof that the best invariant estimator is minimax when the sample size is 1 is given.