Similar Documents
20 similar documents found.
1.
Let X be a p-dimensional random vector with density f(‖X − θ‖) where θ is an unknown location vector. For p ≥ 3, conditions on f are given for which there exist minimax estimators θ̂(X) satisfying ‖X‖ᵗ · ‖θ̂(X) − X‖ ≤ C, where C is a known constant depending on f. (The positive part estimator is among them.) The loss function is a nondecreasing concave function of ‖θ̂ − θ‖². If θ is assumed likely to lie in a ball in Rp, then minimax estimators are given which shrink the observation X, when it lies outside the ball, in the direction of P(X), the closest point on the surface of the ball. The amount of shrinkage depends on the distance of X from the ball.
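A minimal sketch of the geometric idea in the last two sentences, assuming (for illustration only) a ball of radius R centred at c and a hypothetical shrinkage function s(·) of the distance from the ball; the abstract does not specify s, so this is not the paper's construction.

```python
import numpy as np

def shrink_toward_ball(x, c, R, s):
    """Illustrative shrinkage of an observation x toward the ball {z : ||z - c|| <= R}.

    P(x) is the closest point on the ball's surface; the amount of shrinkage is a
    function s of the distance of x from the ball (s is a hypothetical placeholder)."""
    d = np.linalg.norm(x - c) - R          # distance of x from the ball (negative inside)
    if d <= 0:                             # x already inside the ball: leave it alone
        return x.copy()
    u = (x - c) / np.linalg.norm(x - c)    # unit vector from the centre toward x
    step = min(s(d), d)                    # never shrink past the surface point P(x)
    return x - step * u

# Example with a made-up shrinkage function s(d) = 1/(1 + d):
x = np.array([4.0, 0.0, 0.0])
print(shrink_toward_ball(x, c=np.zeros(3), R=1.0, s=lambda d: 1.0 / (1.0 + d)))
```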

2.
Let X be a p-variate (p ≥ 3) vector normally distributed with mean μ and covariance Σ, and let A be a p × p random matrix distributed independently of X, according to the Wishart distribution W(n, Σ). For estimating μ, we consider estimators of the form δ = δ(X, A). We obtain families of Bayes, minimax and admissible minimax estimators with respect to the quadratic loss function (δ − μ)′Σ⁻¹(δ − μ), where Σ is unknown. This paper extends previous results of the author [1], given for the case in which the covariance matrix of the distribution is of the form σ²I, where σ is known.
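A minimal sketch of a classical estimator of the δ(X, A) type (a James–Stein rule with the covariance replaced by the Wishart matrix A); the shrinkage constant (p − 2)/(n − p + 3) is the textbook choice for this setting and is stated here as an assumption, not as the family constructed in the paper.

```python
import numpy as np

def js_unknown_cov(x, A, n):
    """James-Stein-type shrinkage of X toward 0 when the covariance is unknown
    and estimated through a Wishart matrix A ~ W(n, Sigma) independent of X."""
    p = x.shape[0]
    q = float(x @ np.linalg.solve(A, x))   # Mahalanobis-type statistic X' A^{-1} X
    c = (p - 2) / (n - p + 3)              # assumed textbook constant, p >= 3
    return (1.0 - c / q) * x

rng = np.random.default_rng(0)
p, n = 5, 20
Sigma = np.eye(p)
x = rng.multivariate_normal(np.zeros(p), Sigma)
A = sum(np.outer(z, z) for z in rng.multivariate_normal(np.zeros(p), Sigma, size=n))
print(js_unknown_cov(x, A, n))
```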

3.
Let X have a p-variate normal distribution with mean vector θ and identity covariance matrix I. In the squared-error estimation of θ, Baranchik (1970) gives a wide family G of minimax estimators. In this paper, a subfamily C of dominating estimators in G is found such that for each estimator δ1 in G not in C, there exists an estimator δ2 in C which dominates δ1.
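A minimal sketch of an estimator in Baranchik's family, assuming the standard form δ(X) = (1 − a r(‖X‖²)/‖X‖²)X with r nondecreasing and 0 ≤ r ≤ 1; the specific functions r used here are only examples, and the dominating subfamily C identified in the paper is not reproduced.

```python
import numpy as np

def baranchik(x, a, r):
    """Baranchik-form shrinkage estimator (1 - a*r(||x||^2)/||x||^2) x."""
    s = float(x @ x)                     # ||x||^2
    return (1.0 - a * r(s) / s) * x

p = 6
x = np.array([2.0, -1.0, 0.5, 3.0, -0.2, 1.0])
# r identically 1 recovers the James-Stein estimator when a = p - 2.
print(baranchik(x, a=p - 2, r=lambda s: 1.0))
# A nondecreasing r bounded by 1, e.g. r(s) = s/(s + 1), gives another member of the family.
print(baranchik(x, a=p - 2, r=lambda s: s / (s + 1.0)))
```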

4.
For X one observation on a p-dimensional (p ≥ 4) spherically symmetric (s.s.) distribution about θ, minimax estimators whose risks dominate the risk of X (the best invariant procedure) are found with respect to general quadratic loss, L(δ, θ) = (δ − θ)′D(δ − θ), where D is a known p × p positive definite matrix. For C a known p × p positive definite matrix, conditions are given under which estimators of the form δa,r,C,D(X) = (I − a r(|X|²) D^(−1/2)C D^(1/2) |X|^(−2))X are minimax with smaller risk than X. For the problem of estimating the mean when n observations X1, X2, …, Xn are taken on a p-dimensional s.s. distribution about θ, any spherically symmetric translation-invariant estimator δ(X1, X2, …, Xn) will have a s.s. distribution about θ. Among the estimators which have these properties are best invariant estimators, sample means and maximum likelihood estimators. Moreover, under certain conditions, improved robust estimators can be found.
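A minimal sketch that simply evaluates the estimator form quoted above, δa,r,C,D(X) = (I − a r(|X|²) D^(−1/2)C D^(1/2) |X|^(−2))X; the matrices C and D and the function r below are arbitrary placeholders, and the minimaxity conditions from the paper are not checked.

```python
import numpy as np

def sym_power(M, alpha):
    """M^alpha for a symmetric positive definite matrix M, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**alpha) @ V.T

def delta_arCD(x, a, r, C, D):
    """Evaluate (I - a*r(|x|^2) * D^{-1/2} C D^{1/2} / |x|^2) x."""
    s = float(x @ x)
    M = sym_power(D, -0.5) @ C @ sym_power(D, 0.5)
    return x - (a * r(s) / s) * (M @ x)

p = 4
x = np.array([1.0, -2.0, 0.5, 1.5])
D = np.diag([1.0, 2.0, 3.0, 4.0])   # placeholder loss matrix
C = np.eye(p)                       # placeholder C
print(delta_arCD(x, a=1.0, r=lambda s: min(1.0, s), C=C, D=D))
```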

5.
The problem of estimating, under unweighted quadratic loss, the mean of a multinormal random vector X with arbitrary covariance matrix V is considered. The results of James and Stein for the case V = I have since been extended by Bock to cover arbitrary V and also to allow for contracting X towards a subspace other than the origin; minimax estimators (other than X) exist if and only if the eigenvalues of V are not “too spread out.” In this paper a slight variation of Bock's estimator is considered. A necessary and sufficient condition for the minimaxity of the present estimator is (1): the eigenvalues of (I − P)V should not be “too spread out,” where P denotes the projection matrix associated with the subspace towards which X is contracted. The validity of (1) is then examined for a number of patterned covariance matrices (e.g., intraclass covariance, tridiagonal and first order autocovariance) and conditions are given for (1) to hold when contraction is towards the origin or towards the common mean of the components of X. (1) is also examined when X is the usual estimate of the regression vector in multiple linear regression. In several of the cases considered the eigenvalues of V are “too spread out” while those of (I − P)V are not, so that in these instances the present method can be used to produce a minimax estimate.
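A minimal sketch of one common way to make the "not too spread out" condition concrete, assuming (as in Bock's work) that the relevant quantity is the ratio tr((I − P)V)/λmax((I − P)V) exceeding 2; the exact condition used in this paper may differ, so treat the threshold as an assumption.

```python
import numpy as np

def effective_dimension(V, P=None):
    """tr(M)/lambda_max(M) for M = (I - P) V; with P omitted this is tr(V)/lambda_max(V)."""
    p = V.shape[0]
    P = np.zeros((p, p)) if P is None else P
    M = (np.eye(p) - P) @ V
    eig = np.linalg.eigvals(M).real
    return eig.sum() / eig.max()

# Intraclass covariance: the eigenvalues of V itself are very spread out ...
p, rho = 6, 0.9
V = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
print(effective_dimension(V))        # close to 1: "too spread out"
# ... but after projecting out the common-mean direction they are not.
P = np.ones((p, p)) / p              # projection onto span{(1, ..., 1)}
print(effective_dimension(V, P))     # equals p - 1 here: shrinkage can be minimax
```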

6.
Assume X = (X1, …, Xp)′ has a normal mixture distribution, i.e., its density with respect to Lebesgue measure is a scale mixture of p-variate normal densities with mean θ and covariance proportional to Σ, where Σ is a known positive definite matrix and the mixing distribution F is any known c.d.f. on (0, ∞). Estimation of the mean vector under an arbitrary known quadratic loss function Q(θ, a) = (a − θ)′Q(a − θ), Q a positive definite matrix, is considered. An unbiased estimator of risk is obtained for an arbitrary estimator, and a sufficient condition for estimators to be minimax is then derived. The result is applied to modifying all the Stein estimators for the means of independent normal random variables to be minimax estimators for the problem considered here. In particular the results apply to the Stein class of limited translation estimators.

7.
We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector θ. In 3 and higher dimensions, the MLE X is UMVUE and minimax but is inadmissible. It is dominated by the James-Stein estimator and by many others. Johnstone (1988, On inadmissibility of some unbiased estimates of loss, Statistical Decision Theory and Related Topics, IV (eds. S. S. Gupta and J. O. Berger), Vol. 1, 361–379, Springer, New York) considered the estimation of loss for the usual estimator X and the James-Stein estimator. He found improvements over the Stein unbiased estimator of risk. In this paper, for a generalized Bayes point estimator of θ, we compare generalized Bayes estimators to unbiased estimators of loss. We find, somewhat surprisingly, that the unbiased estimator often dominates the corresponding generalized Bayes estimator of loss for priors which give minimax estimators in the original point estimation problem. In particular, we give a class of priors for which the generalized Bayes estimator of θ is admissible and minimax but for which the unbiased estimator of loss dominates the generalized Bayes estimator of loss. We also give a general inadmissibility result for a generalized Bayes estimator of loss. Research supported by NSF Grant DMS-97-04524.
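A minimal sketch of the unbiased loss estimators that serve as the baseline in this literature, assuming the standard facts that for X ∼ Np(θ, I) the loss of X is estimated unbiasedly by the constant p, and the loss of the James–Stein estimator by Stein's unbiased risk estimate p − (p − 2)²/‖X‖²; this only illustrates what a "loss estimator" is and is not the paper's generalized Bayes construction.

```python
import numpy as np

def js_estimator(x):
    """James-Stein estimator for the mean of X ~ N_p(theta, I), p >= 3."""
    p, s = x.size, float(x @ x)
    return (1.0 - (p - 2) / s) * x

def unbiased_loss_estimate_mle(x):
    """Unbiased estimator of the loss ||X - theta||^2 of the MLE X: the constant p."""
    return float(x.size)

def unbiased_loss_estimate_js(x):
    """Stein's unbiased estimate of the loss of the James-Stein estimator."""
    p, s = x.size, float(x @ x)
    return p - (p - 2) ** 2 / s

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
print(js_estimator(x), unbiased_loss_estimate_mle(x), unbiased_loss_estimate_js(x))
```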

8.
Let X1, …, Xp be p (≥ 3) independent random variables, where each Xi has a distribution belonging to the one-parameter exponential family of distributions. The problem is to estimate the unknown parameters simultaneously in the presence of extreme observations. C. Stein (Ann. Statist. 9 (1981), 1135–1151) proposed a method of estimating the mean vector of a multinormal distribution, based on order statistics corresponding to the |Xi|'s, which permitted improvement over the usual maximum likelihood estimator for long-tailed empirical distribution functions. In this paper, the ideas of Stein are extended to the general discrete and absolutely continuous exponential families of distributions. Adaptive versions of the estimators are also discussed.

9.
Let X be an observation from a p-variate (p ≥ 3) normal random vector with unknown mean vector θ and known covariance matrix Σ. The problem of improving upon the usual estimator of θ, δ0(X) = X, is considered. An approach is developed which can lead to improved estimators, δ, for loss functions which are polynomials in the coordinates of (δ − θ). As an example of this approach, the loss L(δ, θ) = |δ − θ|⁴ is considered, and estimators are developed which are significantly better than δ0. When Σ is the identity matrix, these estimators are of the form δ(X) = (1 − b/(d + |X|²))X.
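A minimal sketch that evaluates the quoted estimator form (1 − b/(d + |X|²))X; the particular constants b and d below are placeholders, since the abstract does not give the values for which domination under the |δ − θ|⁴ loss is proved.

```python
import numpy as np

def shrink_bd(x, b, d):
    """Evaluate the shrinkage rule (1 - b/(d + |x|^2)) x."""
    return (1.0 - b / (d + float(x @ x))) * x

x = np.array([2.0, 0.5, -1.0, 1.5])
print(shrink_bd(x, b=2.0, d=1.0))   # b and d are illustrative placeholders only
```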

10.
Let X ∼ N(θ, 1), where θ ∈ [−m, m] for some m > 0, and consider the problem of estimating θ with quadratic loss. We show that the Bayes estimator δm, corresponding to the uniform prior on [−m, m], dominates δ0(x) = x on [−m, m] and it also dominates the MLE over a large part of the parameter interval. We further offer numerical evidence to suggest that δm has quite satisfactory risk performance when compared with the minimax estimators proposed by Casella and Strawderman (1981) and the estimators proposed by Bickel (1981).
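A minimal sketch of the Bayes estimator δm, using the standard fact that under a uniform prior on [−m, m] the posterior of θ given x is a N(x, 1) distribution truncated to [−m, m], whose mean has the closed form below; this is a generic derivation, not code from the paper.

```python
import numpy as np
from scipy.stats import norm

def delta_m(x, m):
    """Bayes estimator of theta under quadratic loss and a uniform prior on [-m, m],
    for X ~ N(theta, 1): the mean of a N(x, 1) posterior truncated to [-m, m]."""
    num = norm.pdf(m + x) - norm.pdf(m - x)
    den = norm.cdf(m - x) + norm.cdf(m + x) - 1.0
    return x + num / den

for x in (-3.0, -1.0, 0.0, 0.5, 2.0):
    print(x, delta_m(x, m=1.0))   # the estimates always stay inside [-1, 1]
```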

11.
Let Σ be a σ-algebra of subsets of a non-empty set Ω. Let X be a real Banach space and let X* stand for the Banach dual of X. Let B(Σ, X) be the Banach space of Σ-totally measurable functions f: Ω → X, and let B(Σ, X)* and B(Σ, X)** denote the Banach dual and the Banach bidual of B(Σ, X), respectively. Let bvca(Σ, X*) denote the Banach space of all countably additive vector measures ν: Σ → X* of bounded variation. We prove a form of the generalized Vitali-Hahn-Saks theorem saying that relative σ(bvca(Σ, X*), B(Σ, X))-sequential compactness in bvca(Σ, X*) implies uniform countable additivity. We derive that if X is reflexive, then every relatively σ(B(Σ, X)*, B(Σ, X))-sequentially compact subset of B(Σ, X)c~ (= the σ-order continuous dual of B(Σ, X)) is relatively σ(B(Σ, X)*, B(Σ, X)**)-sequentially compact. As a consequence, we obtain a Grothendieck-type theorem saying that σ(B(Σ, X)*, B(Σ, X))-convergent sequences in B(Σ, X)c~ are σ(B(Σ, X)*, B(Σ, X)**)-convergent.

12.
We define real parabolic structures on real vector bundles over a real curve. Let (X, σX) be a real curve, and let S ⊂ X be a non-empty finite subset of X such that σX(S) = S. Let N ≥ 2 be an integer. We construct an N-fold cyclic cover p : Y → X in the category of real curves, ramified precisely over each point of S, and with the property that for any element g of the Galois group Γ and any y ∈ Y, one has σY(gy) = g⁻¹σY(y). We establish an equivalence between the category of real parabolic vector bundles on (X, σX) with real parabolic structure over S, all of whose weights are integral multiples of 1/N, and the category of real Γ-equivariant vector bundles on (Y, σY).

13.
Families of minimax estimators are found for the location parameters of a p-variate distribution of the form ∫₀∞ (2πσ²)^(−p/2) exp(−‖X − θ‖²/(2σ²)) dG(σ), where G(·) is a known c.d.f. on (0, ∞), p ≥ 3 and the loss is the sum of squared errors. The estimators are of the form (1 − a r(X′X)/(E₀(1/X′X) X′X))X, where 0 ≤ a ≤ 2, r(X′X) is nondecreasing, and r(X′X)/X′X is nonincreasing. Generalized Bayes minimax estimators are found for certain G(·)'s.
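A minimal sketch of how such an estimator could be evaluated in practice, assuming the reconstructed form (1 − a r(X′X)/(E₀(1/X′X) X′X))X and approximating E₀(1/X′X), the expectation of 1/X′X at θ = 0 under the mixture, by Monte Carlo; the mixing distribution G below is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5

def sample_mixture(theta, size):
    """Draw from the normal scale mixture: sigma ~ G (here a placeholder uniform on
    [0.5, 2]), then X ~ N_p(theta, sigma^2 I)."""
    sigma = rng.uniform(0.5, 2.0, size=size)           # placeholder mixing c.d.f. G
    return theta + sigma[:, None] * rng.standard_normal((size, p))

# Monte Carlo approximation of E_0[1/(X'X)], the expectation at theta = 0.
X0 = sample_mixture(np.zeros(p), size=200_000)
e0_inv = np.mean(1.0 / np.einsum("ij,ij->i", X0, X0))

def strawderman_type(x, a, r):
    s = float(x @ x)
    return (1.0 - a * r(s) / (e0_inv * s)) * x

x = np.array([2.0, -1.0, 0.5, 1.0, 3.0])
# r(s) = s/(s+1) is nondecreasing and r(s)/s is nonincreasing, as required.
print(strawderman_type(x, a=1.0, r=lambda s: s / (s + 1.0)))
```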

14.
Let X be a p-variate (p ≥ 3) vector normally distributed with mean θ and known covariance matrix Σ. It is desired to estimate θ under the quadratic loss (δ − θ)′Q(δ − θ), where Q is a known positive definite matrix. A broad class of minimax estimators for θ is developed.
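A minimal sketch of one standard way to build such an estimator, assuming (as an illustration, not as the paper's class) the usual Stein-identity argument: shrink by c/(X′Σ⁻¹X) with c = tr(ΣQ)/λmax(ΣQ) − 2, which dominates X whenever that ratio exceeds 2; the constant and the condition are stated here as assumptions.

```python
import numpy as np

def minimax_shrinkage(x, Sigma, Q):
    """Shrinkage estimator (1 - c/(x' Sigma^{-1} x)) x with c = tr(Sigma Q)/lambda_max(Sigma Q) - 2,
    a standard choice giving domination of X under the loss (d - theta)' Q (d - theta)."""
    M = Sigma @ Q
    eff = np.trace(M) / np.max(np.linalg.eigvals(M).real)
    if eff <= 2:
        return x.copy()                       # eigenvalues too spread out: keep X
    c = eff - 2                               # any 0 < c < 2*(eff - 2) also works
    s = float(x @ np.linalg.solve(Sigma, x))  # x' Sigma^{-1} x
    return (1.0 - c / s) * x

p = 4
Sigma = np.diag([1.0, 1.5, 2.0, 0.5])
Q = np.eye(p)
x = np.array([1.0, -2.0, 0.5, 1.5])
print(minimax_shrinkage(x, Sigma, Q))
```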

15.
16.
Let X be a vector field on M³ which exhibits a saddle connection between a singularity p1 and a periodic orbit σ1. We give necessary conditions and also sufficient ones in order to have finite modulus of stability. They rely heavily upon restrictions on the behaviour of p1 and σ1.

17.
In the general Gauss-Markoff model (Y, Xβ, σ²V), when V is singular, there exist linear functions of Y which vanish with probability 1, imposing some restrictions on Y as well as on the unknown β. In all earlier work on linear estimation, representations of best linear unbiased estimators (BLUE's) are obtained under the assumption: “L′Y is unbiased for Xβ ⇒ L′X = X.” Such a condition is not, however, necessary. The present paper provides all possible representations of the BLUE's, some of which violate the condition L′X = X. Representations of X for given classes of BLUE's are also obtained.
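A minimal sketch of computing a BLUE of Xβ numerically in a possibly singular model, using Rao's unified least-squares device of replacing V by W = V + XX′ and working with generalized inverses; this is a generic recipe stated under that assumption, not the representations derived in the paper.

```python
import numpy as np

def blue_Xbeta(Y, X, V):
    """BLUE of X @ beta in the Gauss-Markoff model (Y, X beta, sigma^2 V), allowing a
    singular V, via Rao's unified theory: work with W = V + X X' and g-inverses."""
    W = V + X @ X.T
    Winv = np.linalg.pinv(W)             # generalized inverse of W
    G = np.linalg.pinv(X.T @ Winv @ X)   # (X' W^- X)^-
    return X @ G @ X.T @ Winv @ Y

# Small example with a singular dispersion matrix V (rank 2).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
V = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Y = np.array([1.0, 2.0, 2.5])
print(blue_Xbeta(Y, X, V))
```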

18.
We consider the problem of estimating a p-dimensional vector μ1 based on independent variables X1, X2, and U, where X1 is Np(μ1, σ²Σ1), X2 is Np(μ2, σ²Σ2), and U is σ²χ²n (Σ1 and Σ2 are known). A family of minimax estimators is proposed. Some of these estimators can be obtained via Bayesian arguments as well. Comparisons between our results and those of Ghosh and Sinha (1988, J. Multivariate Anal. 27, 206–207) are presented.

19.
We consider the problem of estimating the slope parameter in circular functional linear regression, where scalar responses Y1, ..., Yn are modeled in dependence of 1-periodic, second-order stationary random functions X1, ..., Xn. We consider an orthogonal series estimator of the slope function β, by replacing the first m theoretical coefficients of its development in the trigonometric basis by adequate estimators. We propose a model selection procedure for m in a set of admissible values, by defining a contrast function minimized by our estimator and a theoretical penalty function; this first step assumes the degree of ill-posedness to be known. Then we generalize the procedure to a random set of admissible m's and a random penalty function. The resulting estimator is completely data driven and reaches automatically what is known to be the optimal minimax rate of convergence, in terms of a general weighted L²-risk. This means that we provide adaptive estimators of both β and its derivatives.
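A schematic sketch of the kind of procedure described, assuming (for illustration) that the slope's trigonometric coefficients are estimated by empirical covariances divided by empirical spectral coefficients, and that the dimension m is chosen by minimizing an empirical contrast plus a penalty; both the contrast and the penalty below are placeholders, not the ones defined in the paper.

```python
import numpy as np

def estimate_slope_coeffs(x_coeff, y):
    """Per-frequency slope estimates beta_j = cov(Y, x_j)/var(x_j), where x_j are the
    coefficients of the curves X_i in the trigonometric basis (stationarity makes
    that basis diagonalize the covariance)."""
    cov_xy = x_coeff.T @ (y - y.mean()) / len(y)
    var_x = (x_coeff ** 2).mean(axis=0)
    return cov_xy / var_x, var_x

def select_m(beta_hat, var_x, n, kappa=2.0):
    """Placeholder penalized contrast: minimize -sum_{j<=m} var_x[j]*beta_hat[j]^2 + kappa*m/n."""
    crit = [-(var_x[:m] * beta_hat[:m] ** 2).sum() + kappa * m / n
            for m in range(1, len(beta_hat) + 1)]
    return int(np.argmin(crit)) + 1

# Toy data: work directly with the basis coefficients of the 1-periodic curves.
rng = np.random.default_rng(2)
n, J = 200, 10
x_coeff = rng.standard_normal((n, J)) / np.arange(1, J + 1)   # decaying spectrum
beta_true = 1.0 / np.arange(1, J + 1) ** 2
y = x_coeff @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat, var_x = estimate_slope_coeffs(x_coeff, y)
m_hat = select_m(beta_hat, var_x, n)
print(m_hat, beta_hat[:m_hat])
```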

20.
The behavior of the posterior for a large observation is considered. Two basic situations are discussed: location vectors and natural parameters. Let X = (X1, X2, …, Xn) be an observation from a multivariate exponential family distribution with natural parameter Θ = (Θ1, Θ2, …, Θn). Let θx* be the posterior mode. Sufficient conditions are presented for the distribution of Θ − θx* given X = x to converge to a multivariate normal with mean vector 0 as |x| tends to infinity. These same conditions imply that E(Θ | X = x) − θx* converges to the zero vector as |x| tends to infinity. The posterior for an observation X = (X1, X2, …, Xn) is considered for a location vector Θ = (Θ1, Θ2, …, Θn) as x gets large along a path, γ, in Rn. Sufficient conditions are given for the distribution of γ(t) − Θ given X = γ(t) to converge in law as t → ∞. Slightly stronger conditions ensure that γ(t) − E(Θ | X = γ(t)) converges to the mean of the limiting distribution. These basic results about the posterior mean are extended to cover other estimators. Loss functions which are convex functions of absolute error are considered. Let δ be a Bayes estimator for a loss function of this type. Generally, if the distribution of Θ − E(Θ | X = γ(t)) given X = γ(t) converges in law to a symmetric distribution as t → ∞, it is shown that δ(γ(t)) − E(Θ | X = γ(t)) → 0 as t → ∞.
