Similar Literature
20 similar documents retrieved.
1.
We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector θ. In dimensions 3 and higher, the MLE X is UMVUE and minimax but is inadmissible: it is dominated by the James-Stein estimator and by many others. Johnstone (1988, On inadmissibility of some unbiased estimates of loss, Statistical Decision Theory and Related Topics IV (eds. S. S. Gupta and J. O. Berger), Vol. 1, 361–379, Springer, New York) considered the estimation of loss for the usual estimator X and the James-Stein estimator, and found improvements over the Stein unbiased estimator of risk. In this paper, for a generalized Bayes point estimator of θ, we compare generalized Bayes estimators to unbiased estimators of loss. We find, somewhat surprisingly, that the unbiased estimator often dominates the corresponding generalized Bayes estimator of loss for priors which give minimax estimators in the original point estimation problem. In particular, we give a class of priors for which the generalized Bayes estimator of θ is admissible and minimax but for which the unbiased estimator of loss dominates the generalized Bayes estimator of loss. We also give a general inadmissibility result for a generalized Bayes estimator of loss. Research supported by NSF Grant DMS-97-04524.
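As background for the loss-estimation problem discussed above, the following minimal numpy sketch computes the James-Stein estimate of θ from a single observation X ~ N_p(θ, I_p) together with the standard unbiased estimator of its loss, p − (p−2)²/‖X‖². It illustrates the objects being compared; it is not the paper's generalized Bayes construction, and the example values are hypothetical.

```python
import numpy as np

def james_stein(x):
    """James-Stein estimate of theta from X ~ N_p(theta, I_p), p >= 3."""
    p = x.size
    return (1.0 - (p - 2) / np.sum(x ** 2)) * x

def unbiased_loss_estimate(x):
    """Unbiased estimator of the loss ||JS(X) - theta||^2: p - (p - 2)^2 / ||X||^2."""
    p = x.size
    return p - (p - 2) ** 2 / np.sum(x ** 2)

rng = np.random.default_rng(0)
p = 10
theta = np.zeros(p)                       # hypothetical true mean
x = rng.normal(theta, 1.0)                # one observation X ~ N_p(theta, I_p)
print("realized loss:    ", np.sum((james_stein(x) - theta) ** 2))
print("unbiased estimate:", unbiased_loss_estimate(x))
```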

2.
Summary. It is shown that the relative error of the bootstrap quantile variance estimator is of precise order $n^{-1/4}$, where n denotes the sample size. Likewise, the error of the bootstrap sparsity function estimator is of precise order $n^{-1/4}$. As point estimators, therefore, these estimators converge more slowly than the Bloch-Gastwirth estimator and kernel estimators, which typically have smaller error of order at most $n^{-2/5}$.
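For orientation, the bootstrap quantile variance estimator whose relative error is analyzed above can be written in a few lines. This Monte Carlo sketch uses an assumed Gaussian sample and bootstrap size purely for illustration:

```python
import numpy as np

def bootstrap_quantile_variance(x, p=0.5, n_boot=2000, rng=None):
    """Bootstrap estimate of Var(sample p-quantile): resample, recompute, take variance."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    reps = np.array([np.quantile(rng.choice(x, size=n, replace=True), p)
                     for _ in range(n_boot)])
    return reps.var(ddof=1)

rng = np.random.default_rng(1)
x = rng.standard_normal(200)              # hypothetical sample
print(bootstrap_quantile_variance(x, p=0.5, rng=rng))
```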

3.
In the simultaneous estimation of means from independent Poisson distributions, an estimator is developed which incorporates a prior mean and variance for each Poisson mean estimated. This estimator possesses substantially smaller risk than the usual estimator in a region of the parameter space and seems superior to other estimators proposed for estimating p Poisson means. It is indicated through two asymptotic results that, unlike the conjugate Bayes estimator, the risk of the estimator does not greatly exceed the risk of the usual estimator outside the region of risk improvement.

4.
We study the asymptotic distribution of the $L_1$ regression estimator under general conditions with matrix norming and possibly non-i.i.d. errors. We then introduce an appropriate bootstrap procedure to estimate the distribution of this estimator and study its asymptotic properties. It is shown that this bootstrap is consistent under suitable conditions, while in other situations the bootstrap limit is a random distribution. This work was supported by the J.C. Bose National Fellowship, Government of India.
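A hedged sketch of the setting: an L1 (least absolute deviations) fit, here via statsmodels' median regression, combined with a pairs bootstrap of the estimator. The paper's matrix norming and non-i.i.d. conditions are not reproduced; the design and error law below are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def lad_fit(y, X):
    """L1 (least absolute deviations) regression = median regression."""
    return sm.QuantReg(y, X).fit(q=0.5).params

def pairs_bootstrap(y, X, n_boot=500, rng=None):
    """Pairs bootstrap of the LAD estimator's sampling distribution."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        draws.append(lad_fit(y[idx], X[idx]))
    return np.vstack(draws)

rng = np.random.default_rng(2)
X = sm.add_constant(rng.standard_normal((150, 2)))   # hypothetical design
y = X @ np.array([1.0, 2.0, -1.0]) + rng.laplace(size=150)
print(pairs_bootstrap(y, X).std(axis=0))             # bootstrap standard errors
```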

5.
In this paper we are concerned with finite element approximations to the evaluation of American options. First, following W. Allegretto et al. (SIAM J. Numer. Anal. 39 (2001), 834–857), we introduce a novel practical approach to the problem, which involves an exact reformulation of the original problem and the implementation of the numerical solution over a very small region, so that the algorithm is very rapid and highly accurate. Second, by means of a superapproximation and interpolation postprocessing analysis technique, we present sharp $L^2$- and $L^\infty$-norm error estimates and an $H^1$-norm superconvergence estimate for this finite element method. As a by-product, the global superconvergence result can be used to generate an efficient a posteriori error estimator. This work was supported in part by the National Natural Science Foundation of China (10471103 and 10771158), the National Basic Research Program (2007CB814906), the Social Science Foundation of the Ministry of Education of China (Numerical Methods for Convertible Bonds, 06JA630047), the Tianjin Natural Science Foundation (07JCYBJC14300), and Tianjin University of Finance and Economics.

6.
Process yield is an important criterion used in the manufacturing industry for measuring process performance. Methods for measuring yield for processes with a single characteristic have been investigated extensively, whereas methods for measuring yield for processes with multiple characteristics have been comparatively neglected. Chen et al. (Qual Reliab Eng Int 19:101–110, 2003) proposed a measurement formula called $S_{pk}^T$, which provides an exact measure of the overall process yield for processes with multiple characteristics. In this paper, we consider the natural estimator of $S_{pk}^T$ under multiple samples and derive the asymptotic distribution of the estimator. In addition, a comparison between the SB (standard bootstrap) method and the proposed method based on the lower confidence bound is carried out. Generally, the results indicate that the proposed approach is more reliable than the standard bootstrap method.

7.
We introduce an estimator for the population mean based on maximizing likelihoods formed by parameterizing a kernel density estimate. Due to these origins, we have dubbed the estimator the maximum kernel likelihood estimate (MKLE). A speedy computational method for the MKLE based on binning is implemented in a simulation study, which shows that the MKLE at an optimal bandwidth is decidedly superior in terms of efficiency to the sample mean and other measures of location for heavy-tailed symmetric distributions. An empirical rule and a computational method to estimate this optimal bandwidth are developed and used to construct bootstrap confidence intervals for the population mean. We show that the intervals have approximately nominal coverage and significantly smaller average width than the standard t and z intervals. Finally, we develop some mathematical properties for a very close approximation to the MKLE called the kernel mean. In particular, we demonstrate that the kernel mean is indeed unbiased for the population mean for symmetric distributions.
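On one plausible reading of the construction (our assumption, not the authors' binned algorithm), the MKLE shifts the data so that their location is μ, forms a Gaussian kernel density estimate from the shifted data, and maximizes the likelihood of the observed sample over μ. The bandwidth rule below is a standard plug-in choice, also an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mkle(x, h=None):
    """Maximum kernel likelihood estimate of location (plausible-reading sketch):
    relocate the data to have mean mu, form a Gaussian KDE from the relocated
    data, and choose mu maximizing the likelihood of the observed sample."""
    n = x.size
    h = 1.06 * x.std(ddof=1) * n ** (-0.2) if h is None else h  # Silverman-type rule (assumption)
    xbar = x.mean()

    def neg_loglik(mu):
        centers = x - xbar + mu                       # data relocated to have mean mu
        z = (x[:, None] - centers[None, :]) / h
        dens = np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
        return -np.log(dens).sum()

    return minimize_scalar(neg_loglik, bracket=(xbar - x.std(), xbar + x.std())).x

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=300)                    # heavy-tailed symmetric example
print("MKLE:", mkle(x), " sample mean:", x.mean())
```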

8.
If the underlying distribution function F is smooth, it is known that the convergence rate of the standard bootstrap quantile estimator can be improved from $n^{-1/4}$ to $n^{-1/2+\varepsilon}$, for arbitrary $\varepsilon > 0$, by using a smoothed bootstrap. We show that a further significant improvement of this rate is achieved by studentizing by means of a kernel density estimate. As a consequence, it turns out that the smoothed bootstrap percentile-t method produces confidence intervals with critical points that are second-order correct and of smaller length than competitors based on hybrid or on backwards critical points. Moreover, the percentile-t method for constructing one-sided or two-sided confidence intervals leads to coverage accuracies of order $n^{-1+\varepsilon}$, for arbitrary $\varepsilon > 0$, in the case of analytic distribution functions.

9.
We consider the problem of estimating the variance of a sample quantile calculated from a random sample of size n. The r-th-order kernel-smoothed bootstrap estimator is known to yield an impressively small relative error of order $O(n^{-r/(2r+1)})$. It nevertheless requires strong smoothness conditions on the underlying density function, and its performance is very sensitive to the precise choice of the bandwidth. The unsmoothed bootstrap has a poorer relative error of order $O(n^{-1/4})$, but works for less smooth density functions. We investigate a modified form of the bootstrap, known as the m out of n bootstrap, and show that it yields a relative error of order smaller than $O(n^{-1/4})$ under the same smoothness conditions required by the conventional unsmoothed bootstrap on the density function, provided that the bootstrap sample size m is of an appropriate order. The estimator permits exact, simulation-free computation and has accuracy fairly insensitive to the precise choice of m. A simulation study is reported to provide an empirical comparison of the various methods. Supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU 7131/00P).
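The "exact, simulation-free" computation is possible because an m out of n bootstrap quantile can take only the values of the original order statistics, with binomial probabilities. A sketch under one common convention (the quantile taken as the ⌈mp⌉-th order statistic; this convention and the choice of m below are our assumptions):

```python
import numpy as np
from scipy.stats import binom

def m_out_of_n_quantile_variance(x, p=0.5, m=None):
    """Exact (simulation-free) m-out-of-n bootstrap variance of the sample
    p-quantile, taking the quantile as the ceil(m*p)-th order statistic."""
    x = np.sort(x)
    n = x.size
    m = int(n ** 0.75) if m is None else m            # one conventional choice of m (assumption)
    k = int(np.ceil(m * p))
    j = np.arange(1, n + 1)
    cdf = binom.sf(k - 1, m, j / n)                   # P(bootstrap quantile <= x_(j))
    pmf = np.diff(np.concatenate(([0.0], cdf)))       # exact pmf over order statistics
    mean = np.sum(pmf * x)
    var_m = np.sum(pmf * x ** 2) - mean ** 2
    return (m / n) * var_m                            # rescale from sample size m to n

rng = np.random.default_rng(4)
x = rng.standard_normal(400)
print(m_out_of_n_quantile_variance(x, p=0.5))
```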

10.
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input-output data) at minimum computational cost. We address the problem starting with a statistic based on the $L_1$-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied to identify the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) for assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures for datasets produced outside a conventional input-output functional framework.
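The following is not the authors' estimator but a generic given-data sketch in the same spirit: bin one input, compare each conditional output density with the unconditional density in L1 distance, and average with the bin weights. All names, the bin rule, and the toy model are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def l1_sensitivity_given_data(xi, y, n_bins=20, grid_size=256):
    """Given-data sensitivity of y to input xi: bin-weighted average L1 distance
    between the conditional output density (within xi-bins) and the
    unconditional output density. A generic density-based sketch."""
    grid = np.linspace(y.min(), y.max(), grid_size)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)                       # unconditional density
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (xi >= lo) & (xi <= hi)
        if mask.sum() < 5:
            continue
        f_cond = gaussian_kde(y[mask])(grid)          # density within the bin
        total += mask.mean() * np.sum(np.abs(f_cond - f_y)) * dy
    return 0.5 * total                                # delta-style normalization

rng = np.random.default_rng(5)
x1, x2 = rng.uniform(-1, 1, (2, 5000))
y = x1 ** 2 + 0.1 * x2                                # hypothetical model: x1 dominates
print(l1_sensitivity_given_data(x1, y), l1_sensitivity_given_data(x2, y))
```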

11.
Summary. The product-limit estimator and its quantile process are represented as i.i.d. mean processes, with a remainder of order $n^{-3/4}(\log n)^{3/4}$ a.s. Corresponding bootstrap versions of these representations are given, which can help one visualize how the bootstrap procedure operates in this setup. Research supported by NSF grants MCS-81-02341 and MCS-83-01082.

12.
In this article, we combine Donoho and Johnstone's wavelet shrinkage denoising technique (known as WaveShrink) with Breiman's non-negative garrote. We show that the non-negative garrote shrinkage estimate enjoys the same asymptotic convergence rate as the hard and soft shrinkage estimates. Simulations are used to demonstrate that garrote shrinkage offers advantages over both hard shrinkage (generally smaller mean squared error and less sensitivity to small perturbations in the data) and soft shrinkage (generally smaller bias and smaller overall mean squared error). The minimax thresholds for the non-negative garrote are derived, and the threshold selection procedure based on Stein's unbiased risk estimate (SURE) is studied. We also propose a threshold selection procedure, called SPINSURE, that combines Coifman and Donoho's cycle-spinning with SURE. We use examples to show that SPINSURE is more stable than SURE, with smaller standard deviation and smaller range.
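The garrote rule itself is simple: a wavelet coefficient w is shrunk to w(1 − λ²/w²)₊, which interpolates between hard and soft thresholding. A minimal PyWavelets sketch with the universal threshold λ = σ√(2 log n); the wavelet choice, threshold rule, and test signal are assumptions, not the article's simulation design.

```python
import numpy as np
import pywt

def garrote(w, lam):
    """Non-negative garrote shrinkage: w * (1 - lam^2 / w^2)_+ ."""
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = w * np.maximum(0.0, 1.0 - (lam / w) ** 2)
    return np.where(np.abs(w) > lam, shrunk, 0.0)

def waveshrink_garrote(y, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(y, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest detail level
    lam = sigma * np.sqrt(2 * np.log(len(y)))           # universal threshold (assumption)
    coeffs[1:] = [garrote(c, lam) for c in coeffs[1:]]  # leave approximation untouched
    return pywt.waverec(coeffs, wavelet)

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 1024)
y = np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(t.size)  # hypothetical noisy signal
print(np.mean((waveshrink_garrote(y) - np.sin(8 * np.pi * t)) ** 2))
```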

13.
In this paper, we use the kernel method to estimate sliced average variance estimation (SAVE) and prove that this estimator is both asymptotically normal and root-n consistent. We use this kernel estimator to provide more insight into the differences between slicing estimation and other sophisticated local smoothing methods. Finally, we suggest a Bayes information criterion (BIC) to estimate the dimensionality of SAVE. Examples and real data are presented to illustrate our method.
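For orientation, here is the classical sliced form of SAVE (the paper replaces slicing with kernel smoothing, which this sketch does not attempt): standardize X, average the slice-weighted matrices (I − Cov(Z | slice))², and take leading eigenvectors back on the original scale. The slice count and toy model are assumptions.

```python
import numpy as np

def save_directions(X, y, n_slices=10, n_dirs=1):
    """Classical sliced SAVE: eigenvectors of sum_s (n_s/n) (I - Cov(Z | slice s))^2,
    mapped back to the original X scale."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    root_inv = np.linalg.inv(np.linalg.cholesky(cov)).T  # whitening transform
    Z = (X - mu) @ root_inv                              # Cov(Z) = I
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        D = np.eye(p) - np.cov(Z[chunk], rowvar=False)
        M += (len(chunk) / n) * (D @ D)
    _, vecs = np.linalg.eigh(M)                          # ascending eigenvalues
    beta = root_inv @ vecs[:, ::-1][:, :n_dirs]          # top directions, original scale
    return beta / np.linalg.norm(beta, axis=0)

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 4))
y = X[:, 0] ** 2 + 0.2 * rng.standard_normal(1000)       # symmetric link: SAVE finds e_1
print(save_directions(X, y, n_dirs=1).ravel())
```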

14.
Summary. Stein's positive-part estimator for p normal means is known to dominate the MLE if p ≥ 3. In this article, by introducing suitable priors, we show that Stein's positive-part estimator is a posterior mode. We also consider the Bayes estimators (posterior means) with respect to the same priors and show that some of them dominate the MLE and are admissible.
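The estimator in question has the familiar closed form δ⁺(x) = (1 − (p − 2)/‖x‖²)₊ x; a short numpy sketch with a hypothetical observation:

```python
import numpy as np

def stein_positive_part(x):
    """Positive-part James-Stein estimate of p normal means (p >= 3) from X ~ N_p(theta, I)."""
    p = x.size
    return max(0.0, 1.0 - (p - 2) / np.sum(x ** 2)) * x

rng = np.random.default_rng(8)
x = rng.normal(0.5, 1.0, size=8)          # hypothetical observation, p = 8
print(stein_positive_part(x))
```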

15.
We propose a new variational Bayes (VB) estimator for high-dimensional copulas with discrete, or a combination of discrete and continuous, margins. The method is based on a variational approximation to a tractable augmented posterior and is faster than previous likelihood-based approaches. We use it to estimate drawable vine copulas for univariate and multivariate Markov ordinal and mixed time series. These have dimension rT, where T is the number of observations and r is the number of series, and are difficult to estimate using previous methods. The vine pair-copulas are carefully selected to allow for heteroscedasticity, which is a feature of most ordinal time series data. When combined with flexible margins, the resulting time series models also allow for other common features of ordinal data, such as zero inflation, multiple modes, and under- or overdispersion. Using six example series, we illustrate both the flexibility of the time series copula models and the efficacy of the VB estimator for copulas of up to 792 dimensions and 60 parameters. This far exceeds the size and complexity of copula models for discrete data that can be estimated using previous methods. An online appendix and MATLAB code implementing the method are available as supplementary materials.

16.
An optimal equivariant Bayes estimate of the density of a matrix normal distribution is obtained. This estimate is applied to the construction of the optimal Bayes group classification rule. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 29–39, Perm, 1990.

17.
In this article, Bayes estimation of location parameters under restriction is developed. Since the Bayes estimator is closely connected with the first order statistic that can be observed, it is possible to consider a "complete data" method, through which the pseudo-value of the first order statistic and pseudo-right-censored samples can be obtained. Thus the results under Type-II right censoring can be used directly to obtain more accurate estimators by the Bayes method.

18.
The multivariate normal regression model, in which a vector y of responses is to be predicted by a vector x of explanatory variables, is considered. A hierarchical framework is used to express prior information on both x and y. An empirical Bayes estimator is developed which shrinks the maximum likelihood estimator of the matrix of regression coefficients across rows and columns to nontrivial subspaces reflecting both types of prior information. The estimator is shown to be minimax and is applied to a set of chemometrics data, for which it reduces the cross-validated predicted mean squared error of the maximum likelihood estimator by 38%.

19.
This article makes three contributions. First, we introduce a computationally efficient estimator for the component functions in additive nonparametric regression, exploiting a different motivation from the marginal integration estimator of Linton and Nielsen. Our method provides a reduction in computation of order n, which is highly significant in practice. Second, we define an efficient estimator of the additive components by inserting the preliminary estimator into a backfitting algorithm but taking one step only, and establish that it is equivalent, in various senses, to the oracle estimator based on knowing the other components. Our two-step estimator is minimax superior to that considered in Opsomer and Ruppert, due to its better bias. Third, we define a bootstrap algorithm for computing pointwise confidence intervals and show that it achieves the correct coverage.

20.
Many applications aim to learn a high-dimensional parameter of a data-generating distribution based on a sample of independent and identically distributed observations. For example, the goal might be to estimate the conditional mean of an outcome given a list of input variables. In this prediction context, bootstrap aggregating (bagging) has been introduced as a method to reduce the variance of a given estimator at little cost in bias. Bagging involves applying an estimator to multiple bootstrap samples and averaging the result across bootstrap samples. In order to address the curse of dimensionality, a common practice has been to apply bagging to estimators which themselves use cross-validation, thereby using cross-validation within a bootstrap sample to select fine-tuning parameters trading off bias and variance of the bootstrap-sample-specific candidate estimators. In this article we point out that, in order to achieve the correct bias-variance trade-off for the parameter of interest, one should instead apply the cross-validation selector externally to candidate bagged estimators indexed by these fine-tuning parameters (see the sketch below). We use three simulations to compare the new cross-validated bagging method with bagging of cross-validated estimators and bagging of non-cross-validated estimators.
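The distinction drawn here can be made concrete with scikit-learn: rather than bagging an estimator that tunes itself by internal cross-validation, index the bagged estimators by the tuning parameter and run the cross-validation selector on the outside. The base learner, candidate depths, and toy data below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, (300, 3))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.3 * rng.standard_normal(300)  # toy data

# External CV over candidate *bagged* estimators indexed by the tuning parameter.
depths = [2, 4, 6, 8]
scores = {
    d: cross_val_score(
        BaggingRegressor(DecisionTreeRegressor(max_depth=d),
                         n_estimators=50, random_state=0),
        X, y, cv=5, scoring="neg_mean_squared_error",
    ).mean()
    for d in depths
}
best = max(scores, key=scores.get)
print("depth chosen by external CV of bagged estimators:", best)
```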
