Similar Documents
1.
In Babenko and Belitser (2010), a new notion of posterior concentration rate is proposed: the so-called oracle risk rate, the best possible rate over an appropriately chosen family of estimators, which is a local quantity (as compared, e.g., with global minimax rates). The program of oracle estimation and Bayes oracle posterior optimality is fully implemented in that paper for the Gaussian white noise model and the family of projection estimators. In this note, we complement the upper bound results of Babenko and Belitser (2010) on the posterior concentration rate by a lower bound result, namely that the concentration rate of the posterior distribution around the ‘true’ value cannot be faster than the oracle projection rate.

2.
In this paper, we establish oracle inequalities for penalized projection estimators of the intensity of an inhomogeneous Poisson process. We consequently study the adaptive properties of penalized projection estimators. We first provide lower bounds for the minimax risk over various smoothness sets for the intensity, and then prove that our estimators achieve these lower bounds up to constants. The crucial tools for obtaining the oracle inequalities are new concentration inequalities for suprema of integral functionals of Poisson processes, analogous to Talagrand's inequalities for empirical processes. Received: 24 April 2001 / Revised version: 9 October 2002 / Published online: 15 April 2003. Mathematics Subject Classification (2000): 60E15, 62G05, 62G07. Key words or phrases: Inhomogeneous Poisson process – Concentration inequalities – Model selection – Penalized projection estimator – Adaptive estimation

3.
We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector, θ. In 3 and higher dimensions, the MLE X is UMVUE and minimax but is inadmissible: it is dominated by the James–Stein estimator and by many others. Johnstone (1988, On inadmissibility of some unbiased estimates of loss, Statistical Decision Theory and Related Topics, IV (eds. S. S. Gupta and J. O. Berger), Vol. 1, 361–379, Springer, New York) considered the estimation of loss for the usual estimator X and the James–Stein estimator, and found improvements over the Stein unbiased estimator of risk. In this paper, for a generalized Bayes point estimator of θ, we compare generalized Bayes estimators to unbiased estimators of loss. We find, somewhat surprisingly, that the unbiased estimator often dominates the corresponding generalized Bayes estimator of loss for priors which give minimax estimators in the original point estimation problem. In particular, we give a class of priors for which the generalized Bayes estimator of θ is admissible and minimax but for which the unbiased estimator of loss dominates the generalized Bayes estimator of loss. We also give a general inadmissibility result for a generalized Bayes estimator of loss. Research supported by NSF Grant DMS-97-04524.
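The James–Stein dominance referred to in this abstract is easy to state concretely. A minimal sketch, assuming identity covariance and squared error loss (the function name is ours, not the paper's):

```python
import numpy as np

def james_stein(x):
    """James-Stein estimator of a p-dimensional normal mean with identity
    covariance; it dominates the MLE x under squared error loss when p >= 3."""
    p = len(x)
    return (1 - (p - 2) / np.dot(x, x)) * x

x = np.array([1.0, 2.0, 0.5, -1.0, 1.5])   # observed vector, p = 5
est = james_stein(x)                        # every coordinate shrunk toward 0
```

The shrinkage factor 1 − (p − 2)/‖x‖² pulls the observation toward the origin; the positive-part variant clips it at zero.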

4.
This paper presents two results: a density estimator and an estimator of regression error density. We first propose a density estimator constructed by model selection, which is adaptive for the quadratic risk at a given point. Then we apply this result to estimate the error density in a homoscedastic regression framework $Y_i = b(X_i) + \varepsilon_i$, from which we observe a sample $(X_i, Y_i)$. Given an adaptive estimator $\hat b$ of the regression function, we apply the density estimation procedure to the residuals $\hat \varepsilon_i = Y_i - \hat b(X_i)$. We get an estimator of the density of $\varepsilon_i$ whose rate of convergence for the quadratic pointwise risk is the maximum of two rates: the minimax rate we would get if the errors were directly observed and the minimax rate of convergence of $\hat b$ for the quadratic integrated risk.

5.
We propose an algorithm to estimate the common density s of a stationary process $X_1, \ldots, X_n$. We suppose that the process is either β-mixing or τ-mixing. We provide a model selection procedure based on a generalization of Mallows' C_p and we prove oracle inequalities for the selected estimator under a few prior assumptions on the collection of models and on the mixing coefficients. We prove that our estimator is adaptive over a class of Besov spaces, namely, that it achieves the same rates of convergence as in the i.i.d. framework.
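For intuition, a Mallows-type penalized least-squares selection can be sketched with regular histogram models on [0, 1]. The penalty pen(D) = c·D/n and the histogram collection below are illustrative assumptions; the paper works with general model collections and penalty constants that depend on the mixing coefficients:

```python
import numpy as np

def select_histogram(x, max_bins=50, c=2.0):
    """Choose the number of histogram bins on [0, 1] by a Mallows-C_p-type
    penalized least-squares criterion (illustrative penalty pen(D) = c*D/n)."""
    n = len(x)
    best_crit, best_D = np.inf, 1
    for D in range(1, max_bins + 1):
        counts, _ = np.histogram(x, bins=D, range=(0.0, 1.0))
        # Empirical least-squares contrast of the histogram (projection)
        # estimator: -sum_j fhat_j^2 * h = -D * sum_j (n_j / n)^2.
        crit = -D * np.sum((counts / n) ** 2) + c * D / n
        if crit < best_crit:
            best_crit, best_D = crit, D
    return best_D

# Nearly uniform sample: the penalized criterion picks the coarsest model.
x = np.linspace(0.01, 0.99, 200)
D = select_histogram(x)
```

The contrast rewards models that fit the empirical distribution, while the penalty grows with model dimension D, so a flat sample selects a single bin.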

6.
This paper treats the problem of estimating the restricted means of normal distributions with a known variance, where the means are restricted to a polyhedral convex cone which includes various restrictions such as positive orthant, simple order, tree order and umbrella order restrictions. In the context of the simultaneous estimation of the restricted means, it is of great interest to investigate decision-theoretic properties of the generalized Bayes estimator against the uniform prior distribution over the polyhedral convex cone. In this paper, the generalized Bayes estimator is shown to be minimax. It is also proved that it is admissible in the one- or two-dimensional case, but is improved on by a shrinkage estimator in the three- or more-dimensional case. This means that the so-called Stein phenomenon on the minimax generalized Bayes estimator can be extended to the case where the means are restricted to the polyhedral convex cone. The risk behaviors of the estimators are investigated through Monte Carlo simulation, and it is revealed that the shrinkage estimator has a substantial risk reduction.

7.
Locally Adaptive Wavelet Empirical Bayes Estimation of a Location Parameter
The traditional empirical Bayes (EB) model is considered with the parameter being a location parameter, in the situation when the Bayes estimator has a finite degree of smoothness and, possibly, jump discontinuities at several points. A nonlinear wavelet EB estimator based on wavelets with bounded supports is constructed, and it is shown that a finite number of jump discontinuities in the Bayes estimator do not affect the rate of convergence of the prior risk of the EB estimator to zero. It is also demonstrated that the estimator adjusts to the degree of smoothness of the Bayes estimator, locally, so that outside the neighborhoods of the points of discontinuities, the posterior risk has a high rate of convergence to zero. Hence, the technique suggested in the paper provides estimators which are significantly superior in several respects to those constructed earlier.

8.
In this paper we address the problem of estimating θ1 when two observations Y1 and Y2, with means θ1 and θ2 respectively, are observed and |θ1 − θ2| ≤ c for a known constant c. Clearly Y2 contains information about θ1. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators where the weights are selected using the data. One approach selects such weights in accord with Akaike's entropy maximization criterion. We describe several estimators obtained in this way. The maximum likelihood estimator is investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and (when c is sufficiently small) a minimax estimator. Moreover, we assess their properties both numerically and theoretically. Finally, we see how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart.
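The bias-for-precision trade-off can be sketched for unit-variance normal observations. The data-free weight rule below, minimizing a worst-case MSE bound over a grid, is a hypothetical stand-in for the paper's Akaike-based adaptive selection:

```python
import numpy as np

def weighted_likelihood_estimate(y1, y2, c):
    """Weighted-likelihood estimate of theta1 from Y1 ~ N(theta1, 1) and
    Y2 ~ N(theta2, 1) with |theta1 - theta2| <= c.  The weight lam on Y2
    is chosen by minimizing an upper bound on MSE (an illustrative rule,
    not the paper's entropy-maximization criterion)."""
    lams = np.linspace(0.0, 1.0, 101)
    # Variance of (Y1 + lam*Y2)/(1 + lam) plus worst-case squared bias.
    mse_bound = (1 + lams**2) / (1 + lams) ** 2 + (lams * c / (1 + lams)) ** 2
    lam = lams[np.argmin(mse_bound)]
    return (y1 + lam * y2) / (1 + lam)
```

When c = 0 the bound is minimized by pooling both observations equally; when c is large the weight on Y2 collapses to zero and the estimate reduces to Y1.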

9.
In the framework of nonparametric multivariate function estimation we are interested in structural adaptation. We assume that the function to be estimated has the “single-index” structure, where neither the link function nor the index vector is known. This article suggests a novel procedure that adapts simultaneously to the unknown index and the smoothness of the link function. For the proposed procedure, we prove a “local” oracle inequality (described by the pointwise seminorm), which is then used to obtain the upper bound on the maximal risk of the adaptive estimator under the assumption that the link function belongs to a scale of Hölder classes. The lower bound on the minimax risk shows that in the case of estimating at a given point the constructed estimator is optimally rate adaptive over the considered range of classes. For the same procedure we also establish a “global” oracle inequality (under the L_r norm, r < ∞) and examine its performance over the Nikol’skii classes. This study shows that the proposed method can be applied to estimating functions of inhomogeneous smoothness, that is, functions whose smoothness may vary from point to point.

10.
This paper addresses the problem of estimating the density of a future outcome from a multivariate normal model. We propose a class of empirical Bayes predictive densities and evaluate their performances under the Kullback–Leibler (KL) divergence. We show that these empirical Bayes predictive densities dominate the Bayesian predictive density under the uniform prior and thus are minimax under some general conditions. We also establish the asymptotic optimality of these empirical Bayes predictive densities in infinite-dimensional parameter spaces through an oracle inequality.

11.
We study the problem of aggregation of estimators. Given a collection of M different estimators, we construct a new estimator, called the aggregate, which is nearly as good as the best linear combination of the initial estimators over an l_1-ball of ℝ^M. The aggregate is obtained by a particular version of the mirror averaging algorithm. We show that our aggregation procedure satisfies sharp oracle inequalities under general assumptions. We then apply these results to a new aggregation problem: D-convex aggregation. Finally, we implement our procedure in a Gaussian regression model with random design and prove its optimality in a minimax sense up to a logarithmic factor.

12.
We consider estimation of a multivariate normal mean vector under sum of squared error loss. We propose a new class of minimax admissible estimators which are generalized Bayes with respect to a prior distribution that is a mixture of a point prior at the origin and a continuous hierarchical-type prior. We also study conditions under which these generalized Bayes minimax estimators improve on the James–Stein estimator and on the positive-part James–Stein estimator.

13.
This paper derives the optimal linear Bayes estimator of the multivariate linear model under misspecification, discusses its superiority over the least squares estimator under matrix loss, and establishes the admissibility and minimaxity of the Bayes estimator.

14.
We study the problem of the nonparametric estimation of a probability density in L2(R). Expressing the mean integrated squared error in the Fourier domain, we show that it is close to the mean squared error in the Gaussian sequence model. Then, applying a modified version of Stein's blockwise method, we obtain a linear monotone oracle inequality and a kernel oracle inequality. As a consequence, the proposed estimator is sharp minimax adaptive (i.e. up to a constant) on a scale of Sobolev classes of densities. To cite this article: Ph. Rigollet, C. R. Acad. Sci. Paris, Ser. I 340 (2005).
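The blockwise Stein idea can be sketched on a vector of empirical (e.g. Fourier) coefficients: each block is shrunk by its own James–Stein factor. This is a simplified illustration with fixed-length blocks; the paper's modified method uses a specific block structure and constants:

```python
import numpy as np

def blockwise_stein(coef, sigma2, block_len=8):
    """Apply positive-part James-Stein shrinkage separately to each block of
    empirical coefficients (a simplified sketch of the blockwise method)."""
    out = coef.copy()
    for start in range(0, len(coef), block_len):
        blk = coef[start:start + block_len]
        p = len(blk)
        energy = np.sum(blk ** 2)
        if p >= 3 and energy > 0:
            factor = max(1 - (p - 2) * sigma2 / energy, 0)
            out[start:start + block_len] = factor * blk
    return out

# High-energy block is barely touched; a noise-level block is zeroed out.
coef = np.concatenate([np.full(8, 10.0), np.full(8, 0.1)])
out = blockwise_stein(coef, sigma2=1.0)
```

Blocks whose energy is close to the noise level are annihilated, which is what makes the procedure adaptive over smoothness classes.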

15.
Minimax nonhomogeneous linear estimators of scalar linear parameter functions are studied in the paper under restrictions on the parameters and the variance-covariance matrix. The variance-covariance matrix of the linear model under consideration is assumed to be unknown, but belonging to a specific set R of nonnegative-definite matrices. It is shown under this assumption that, without any restriction on the parameters, minimax estimators correspond to the least-squares estimators of the parameter functions for the “worst” variance-covariance matrix. Then the minimax mean-square error of the estimator is derived using the Bayes approach, and finally exact formulas are derived for the calculation of minimax estimators under elliptical restrictions on the parameter space and for two special classes of possible variance-covariance matrices R. For example, it is shown that a special choice of a constant q_0 and a matrix W_0 defining one of the above classes R leads to the well-known Kuks–Olman admissible estimator (see [16]) with a known variance-covariance matrix W_0. Bibliography: 32 titles. Translated from Obchyslyuval'na ta Prykladna Matematyka, No. 81, 1997, pp. 79–92.

16.
We consider the linear regression model where prior information in the form of linear inequalities restricts the parameter space to a polyhedron. Since the linear minimax estimator has, in general, to be determined numerically, it was proposed to minimize an upper bound of the maximum risk instead. The resulting so-called quasiminimax estimator can be easily calculated in closed form. Unfortunately, both minimax estimators may violate the prior information. Therefore, we consider projection estimators which are obtained by projecting the estimate in an optional second step. The performance of these estimators is investigated in a Monte Carlo study together with several least squares estimators, including the inequality restricted least squares estimator. It turns out that both the projected and the unprojected quasiminimax estimators have the best average performance.

17.
For a vast array of general spherically symmetric location-scale models with a residual vector, we consider estimating the (univariate) location parameter when it is lower bounded. We provide conditions for estimators to dominate the benchmark minimax MRE estimator, and thus be minimax under scale invariant loss. These minimax estimators include the generalized Bayes estimator with respect to the truncation of the common non-informative prior onto the restricted parameter space for normal models under general convex symmetric loss, as well as non-normal models under scale invariant \(L^p\) loss with \(p>0\). We cover many other situations when the loss is asymmetric, and where other generalized Bayes estimators, obtained with different powers of the scale parameter in the prior measure, are proven to be minimax. We rely on various novel representations, sharp sign change analyses, as well as capitalize on Kubokawa’s integral expression for risk difference technique. Several properties such as robustness of the generalized Bayes estimators under various loss functions are obtained.

18.
We consider an approach yielding a minimax estimator in the linear regression model with a priori information on the parameter vector, e.g., ellipsoidal restrictions. This estimator is computed directly from the loss function and can be motivated by the general Pitman nearness criterion. It turns out that this approach coincides with the projection estimator which is obtained by projecting an initial arbitrary estimate on the subset defined by the restrictions.
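The second-step projection is simple to sketch in the spherical special case of an ellipsoidal restriction ‖β‖ ≤ r (for a general ellipsoid one would rescale coordinates first); the function name is ours:

```python
import numpy as np

def project_ball(beta, r):
    """Project an initial estimate (e.g. least squares) onto the restriction
    set ||beta|| <= r: estimates inside the set are unchanged, estimates
    outside are rescaled to the boundary."""
    norm = np.linalg.norm(beta)
    return beta if norm <= r else beta * (r / norm)

out = project_ball(np.array([3.0, 4.0]), 1.0)   # rescaled onto the unit sphere
```

The projected estimate always satisfies the prior information, unlike the unprojected minimax estimators discussed in entry 16.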

19.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., l_1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general l_1-regularized convex minimization problem, i.e., the problem of minimizing an l_1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale l_1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale l_1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale l_1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems.
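The least-squares instance of this problem class can be sketched with plain cyclic coordinate descent and soft thresholding. This is a simplified single-coordinate illustration, not the paper's CGD method, which updates whole coordinate blocks chosen by a Gauss-Southwell-type rule:

```python
import numpy as np

def coord_descent_lasso(A, b, lam, iters=200):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each coordinate update is an exact minimization: a gradient step along
    one coordinate followed by soft thresholding."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                    # residual, maintained incrementally
    col_sq = (A ** 2).sum(axis=0)    # squared column norms
    for _ in range(iters):
        for j in range(n):
            if col_sq[j] == 0:
                continue
            z = x[j] + A[:, j] @ r / col_sq[j]          # unpenalized minimizer
            x_new = np.sign(z) * max(abs(z) - lam / col_sq[j], 0)
            r += A[:, j] * (x[j] - x_new)               # update residual
            x[j] = x_new
    return x

# Orthogonal design: the solution is the soft-thresholded data vector.
x = coord_descent_lasso(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

With an identity design matrix the iteration converges in one sweep, mapping each data coordinate b_j to sign(b_j)·max(|b_j| − λ, 0).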

20.
We show that Bayes estimators of an unknown density can adapt to unknown smoothness of the density. We combine prior distributions on each element of a list of log spline density models of different levels of regularity with a prior on the regularity levels to obtain a prior on the union of the models in the list. If the true density of the observations belongs to the model with a given regularity, then the posterior distribution concentrates near this true density at the rate corresponding to this regularity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号