Similar Literature
20 similar records retrieved.
1.
In the linear regression model with ellipsoidal parameter constraints, the problem of estimating the unknown parameter vector is studied. A well-described subclass of Bayes linear estimators is proposed in the paper. It is shown that for each member of this subclass, there exists a generalized quadratic risk function under which the estimator is minimax. Moreover, some of the proposed Bayes linear estimators are admissible with respect to all possible generalized quadratic risks. A necessary and sufficient condition is also given which ensures that the considered Bayes linear estimator improves on the least squares estimator over the whole ellipsoid, whichever generalized risk function is chosen.

2.
We consider the simultaneous linear minimax estimation problem in linear models with ellipsoidal constraints imposed on an unknown parameter. Using convex analysis, we derive necessary and sufficient optimality conditions for a matrix to define the linear minimax estimator. For certain regions of the set of characteristics of linear models and constraints, we exploit these optimality conditions and get explicit formulae for linear minimax estimators.

3.
The problem of optimal prediction in the stochastic linear regression model with infinitely many parameters is considered. We suggest a prediction method that asymptotically outperforms the ordinary least squares predictor. Moreover, if the random errors are Gaussian, the method is asymptotically minimax over ellipsoids in ℓ2. The method is based on a regularized least squares estimator with weights given by the Pinsker filter. We also consider the case of dynamic linear regression, which is important in the context of transfer function modeling.

4.
In this paper, we consider the problem of estimating the covariance matrix and the generalized variance when the observations follow a nonsingular multivariate normal distribution with unknown mean. A new method is presented to obtain a truncated estimator that utilizes the information available in the sample mean matrix and dominates the James-Stein minimax estimator. Several scale-equivariant minimax estimators are also given. This method is then applied to obtain new truncated and improved estimators of the generalized variance; it also provides a new proof of the results of Shorrock and Zidek (Ann. Statist. 4 (1976) 629) and Sinha (J. Multivariate Anal. 6 (1976) 617).

5.
6.
We consider a panel data semiparametric partially linear regression model with an unknown vector β of regression coefficients, an unknown nonparametric function g(·) for the nonlinear component, and unobservable serially correlated errors. The correlated errors are modeled by a vector autoregressive process which involves a constant intraclass correlation. Applying the pilot estimators of β and g(·), we construct estimators of the autoregressive coefficients, the intraclass correlation and the error variance, and investigate their asymptotic properties. Fitting the error structure results in a new semiparametric two-step estimator of β, which is shown to be asymptotically more efficient than the usual semiparametric least squares estimator in terms of asymptotic covariance matrix. Asymptotic normality of this new estimator is established, and a consistent estimator of its asymptotic covariance matrix is presented. Furthermore, a corresponding estimator of g(·) is also provided. These results can be used to make asymptotically efficient statistical inference. Some simulation studies are conducted to illustrate the finite-sample performance of the proposed estimators.

7.
In this paper we consider the problem of estimating the matrix of regression coefficients in a multivariate linear regression model in which the design matrix is nearly singular. Under the assumption of normality, we propose empirical Bayes ridge regression estimators with three types of shrinkage functions, that is, scalar, componentwise and matricial shrinkage. These proposed estimators are proved to be uniformly better than the least squares estimator, that is, minimax in terms of risk under Strawderman's loss function. Through simulation and empirical studies, they are also shown to be useful in the presence of multicollinearity.
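Editorial note: a minimal sketch of plain ridge shrinkage for a nearly singular design, assuming NumPy; the fixed penalty lam is an illustrative assumption and is not the empirical Bayes shrinkage studied in the paper.

import numpy as np

def ridge_coefficients(X, Y, lam=1.0):
    # Ridge estimator (X'X + lam*I)^{-1} X'Y; lam > 0 stabilises a nearly singular X'X.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# Synthetic example with a nearly collinear design and a matrix of responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 1e-6 * rng.normal(size=50)   # near singularity
B = np.array([[1.0, 0.5], [-2.0, 0.0], [0.0, 1.0]])
Y = X @ B + rng.normal(scale=0.1, size=(50, 2))
print(ridge_coefficients(X, Y, lam=1.0))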

8.
This paper is devoted to the problem of minimax estimation of parameters in linear regression models with uncertain second-order statistics. The solution to the problem is shown to be the least squares estimator corresponding to the least favourable matrix of second moments. This allows us to construct a new algorithm for minimax estimation closely connected with the least squares method. As an example, we consider the problem of polynomial regression introduced by A. N. Kolmogorov.

9.
We consider two problems: (1) estimate a normal mean under a general divergence loss introduced in [S. Amari, Differential geometry of curved exponential families — curvatures and information loss, Ann. Statist. 10 (1982) 357-387] and [N. Cressie, T.R.C. Read, Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B. 46 (1984) 440-464] and (2) find a predictive density of a new observation drawn independently of observations sampled from a normal distribution with the same mean but possibly with a different variance under the same loss. The general divergence loss includes as special cases both the Kullback-Leibler and Bhattacharyya-Hellinger losses. The sample mean, which is a Bayes estimator of the population mean under this loss and the improper uniform prior, is shown to be minimax in any arbitrary dimension. A counterpart of this result for predictive density is also proved in any arbitrary dimension. The admissibility of these rules holds in one dimension, and we conjecture that the result is true in two dimensions as well. However, the general Baranchick [A.J. Baranchick, A family of minimax estimators of the mean of a multivariate normal distribution, Ann. Math. Statist. 41 (1970) 642-645] class of estimators, which includes the James-Stein estimator and the Strawderman [W.E. Strawderman, Proper Bayes minimax estimators of the multivariate normal mean, Ann. Math. Statist. 42 (1971) 385-388] class of estimators, dominates the sample mean in three or higher dimensions for the estimation problem. An analogous class of predictive densities is defined, and any member of this class is shown to dominate the predictive density corresponding to a uniform prior in three or higher dimensions. For the prediction problem, in the special case of Kullback-Leibler loss, our results complement to a certain extent some of the recent important work of Komaki [F. Komaki, A shrinkage predictive distribution for multivariate normal observations, Biometrika 88 (2001) 859-864] and George, Liang and Xu [E.I. George, F. Liang, X. Xu, Improved minimax predictive densities under Kullback-Leibler loss, Ann. Statist. 34 (2006) 78-92]. Our proposed approach produces a general class of predictive densities (not necessarily Bayes, but not excluding Bayes predictors) dominating the predictive density under a uniform prior. We also show that various modifications of the James-Stein estimator continue to dominate the sample mean, and by the duality between estimation and predictive density results which we establish, similar results continue to hold for the prediction problem as well.
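Editorial note: a minimal sketch of the classical (non-positive-part) James-Stein shrinkage of the sample mean toward the origin, assuming NumPy and a known unit error variance; the setup is illustrative and the divergence-loss estimators and predictive densities of the abstract are not reproduced.

import numpy as np

def james_stein(y_bar, n, p, sigma2=1.0):
    # Shrinks the p-dimensional sample mean toward 0; requires p >= 3.
    # Estimator: (1 - (p - 2) * (sigma2 / n) / ||y_bar||^2) * y_bar.
    shrink = 1.0 - (p - 2) * sigma2 / (n * np.sum(y_bar ** 2))
    return shrink * y_bar

rng = np.random.default_rng(1)
theta = np.array([0.3, -0.2, 0.5, 0.1])
sample = rng.normal(loc=theta, size=(25, 4))
y_bar = sample.mean(axis=0)
print(james_stein(y_bar, n=25, p=4))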

10.
We consider an approach yielding a minimax estimator in the linear regression model with a priori information on the parameter vector, e.g., ellipsoidal restrictions. This estimator is computed directly from the loss function and can be motivated by the general Pitman nearness criterion. It turns out that this approach coincides with the projection estimator, which is obtained by projecting an arbitrary initial estimate onto the subset defined by the restrictions.

11.
We present a new approach to univariate partial least squares regression (PLSR) based on directional signal-to-noise ratios (SNRs). We show how PLSR, unlike principal components regression, takes into account the actual value and not only the variance of the ordinary least squares (OLS) estimator. We find an orthogonal sequence of directions associated with decreasing SNR. Then, we state partial least squares estimators as least squares estimators constrained to be null on the last directions. We also give another procedure that shows how PLSR rebuilds the OLS estimator iteratively by seeking at each step the direction with the largest difference of signals over the noise. The latter approach does not involve any arbitrary scale or orthogonality constraints.
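Editorial note: for orientation only, a minimal usage sketch of off-the-shelf univariate PLS regression with scikit-learn's PLSRegression on synthetic data; the SNR-based construction described in the abstract is not implemented here.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, -0.5, 0.0, 2.0]) + rng.normal(scale=0.5, size=100)

# Fit a univariate PLS regression with two latent directions.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.coef_.ravel())     # implied regression coefficients
print(pls.predict(X[:3]))    # fitted values for the first three rows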

12.
This paper is concerned with the problem of estimating a matrix of means in multivariate normal distributions with an unknown covariance matrix under invariant quadratic loss. It is first shown that the modified Efron-Morris estimator is characterized as a certain empirical Bayes estimator. This estimator modifies the crude Efron-Morris estimator by adding a scalar shrinkage term. It is next shown that the idea of this modification provides a general method for the improvement of estimators, which results in further improvements on several minimax estimators. As a new method for improvement, an adaptive combination of the modified Stein and James-Stein estimators is also proposed and is shown to be minimax. Through Monte Carlo studies of the risk behaviors, it is numerically shown that the proposed combined estimator inherits the nice risk properties of both individual estimators and thus has a very favorable risk behavior in the small-sample case. Finally, the application to a two-way layout MANOVA model with interactions is discussed.

13.
We establish the consistency, asymptotic normality, and efficiency of estimators derived by minimizing the median of a loss function in a Bayesian context. We contrast this procedure with the behavior of two Frequentist procedures, the least median of squares (LMS) and the least trimmed squares (LTS) estimators, in regression problems. The LMS estimator is the Frequentist version of our estimator, and the LTS estimator approaches a median-based estimator as the trimming approaches 50% on each side. We argue that the Bayesian median-based method is a good tradeoff between the two Frequentist estimators.

14.
In this paper, we propose a new estimator of the kurtosis in a multivariate nonnormal linear regression model. The usual estimator is constructed from the arithmetic mean of the second power of the squared sample Mahalanobis distances between the observations and their estimated values. That estimator underestimates the kurtosis and has a large bias, even when the sample size is not small. We replace this squared distance with a transformed squared norm of the Studentized residual, using a monotonically increasing function. Our proposed estimator is defined as the arithmetic mean of the second power of these transformed squared norms, with a correction term and a tuning parameter. The correction term adjusts our estimator to be unbiased under normality, and the tuning parameter controls the sizes of the squared norms of the residuals. The family of our estimators includes estimators based on ordinary least squares and predicted residuals. Numerical experiments verify that the bias of our new estimator is smaller than that of the usual estimator.
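Editorial note: a minimal sketch, assuming NumPy, of the usual Mahalanobis-distance-based multivariate kurtosis estimate that the abstract describes as biased downward; the proposed Studentized-residual correction is not reproduced here.

import numpy as np

def mahalanobis_kurtosis(residuals):
    # Arithmetic mean of the second power of the squared Mahalanobis distances
    # of the rows from their mean: the "usual" multivariate kurtosis estimate.
    n = residuals.shape[0]
    centred = residuals - residuals.mean(axis=0)
    S = centred.T @ centred / n
    d2 = np.einsum('ij,jk,ik->i', centred, np.linalg.inv(S), centred)
    return np.mean(d2 ** 2)

rng = np.random.default_rng(3)
E = rng.normal(size=(200, 3))
print(mahalanobis_kurtosis(E))   # close to p*(p+2) = 15 for normal errors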

15.
If the errors in the linear regression model are assumed to be independent with nonvanishing third and finite fourth moments, then it is possible to improve all linear estimators by so-called linear plus quadratic (LPQ) estimators. These consist of linear and quadratic terms in the endogenous variable and depend on the unknown moments of the errors, which, in general, have to be estimated from the data. In this paper, we use LPQ estimators for quasiminimax estimation and some related problems. Support by Deutsche Forschungsgemeinschaft Grant No. Tr 253/1-2 is gratefully acknowledged.

16.
In this paper we address the problem of estimating θ1 when two observations Y1 and Y2 are available and |θ1 − θ2| ≤ c for a known constant c. Clearly Y2 contains information about θ1. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators where the weights are selected using the data. One approach selects such weights in accordance with Akaike's entropy maximization criterion. We describe several estimators obtained in this way. The maximum likelihood estimator is also investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and (when c is sufficiently small) a minimax estimator. Moreover, we assess their properties both numerically and theoretically. Finally, we see how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart.
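Editorial note: a minimal sketch of a fixed-weight version of this idea, assuming for illustration two independent normal observations with unit variances; the weight lam is hand-picked here and is not the data-adaptive Akaike-entropy choice studied in the paper.

import numpy as np

def weighted_estimate(y1, y2, lam):
    # Convex combination of the two observations; lam = 1 recovers the MLE y1,
    # while smaller lam trades bias (when theta1 != theta2) for lower variance.
    return lam * y1 + (1.0 - lam) * y2

rng = np.random.default_rng(4)
theta1, theta2, c = 0.0, 0.4, 0.5          # |theta1 - theta2| <= c
y1 = rng.normal(theta1, 1.0)
y2 = rng.normal(theta2, 1.0)
print(weighted_estimate(y1, y2, lam=0.7))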

17.
The estimation problem of the parameters in a symmetry model for categorical data has been considered by many authors in the statistical literature (for example, Bowker (1948) [1], Ireland et al. (1969) [2], Quade and Salama (1975) [3], Cressie and Read (1988) [4], Menéndez et al. (2005) [5]) without using uncertain prior information. It is well known that many new and interesting estimators using uncertain prior information have been studied by a host of researchers in different statistical models, and many papers have been published on this topic (see Saleh (2006) [9] and references therein). In this paper, we consider the symmetry model for categorical data and study, for the first time, some new estimators when non-sample information about the symmetry of the probabilities is available. The decision to use a "restricted" estimator or an "unrestricted" estimator is based on the outcome of a preliminary test, and a shrinkage technique is then used. We present a unified study in the sense that we consider not only the maximum likelihood estimator and the likelihood ratio or chi-square test statistic, but also minimum phi-divergence estimators and phi-divergence test statistics. Families of minimum phi-divergence estimators and phi-divergence test statistics are wide classes that contain as particular cases the maximum likelihood estimator, the likelihood ratio test and the chi-square test statistic. In an asymptotic set-up, the biases and the risks under the squared loss function of the proposed estimators are derived and compared. A numerical example clarifies the content of the paper.

18.
This paper treats the problem of estimating positive parameters restricted to a polyhedral convex cone, which includes typical order restrictions such as simple order, tree order and umbrella order restrictions. Two methods are used to show the improvement of order-preserving estimators over crude non-order-preserving estimators without any assumption on the underlying distributions. One is to use Fenchel's duality theorem, whereby the superiority of the isotonic regression estimator is established under the general restriction to polyhedral convex cones. The other is the use of the Abel identity, from which we can derive a class of improved estimators that includes order-statistics-based estimators under the typical order restrictions. When the underlying distributions are scale families, the unbiased estimators and their order-restricted estimators are shown to be minimax. The minimaxity of the generalized Bayes estimator against the prior over the restricted space is also demonstrated in the two-dimensional case. Finally, some examples and multivariate extensions are given.
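Editorial note: a minimal sketch of an order-preserving (isotonic) fit under the simple order restriction, assuming scikit-learn's pool-adjacent-violators implementation; the polyhedral-cone and minimaxity results in the abstract go well beyond this special case.

import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(10)
y = np.array([1.0, 0.8, 1.5, 1.4, 2.1, 1.9, 2.5, 3.0, 2.8, 3.5])

# Fit nondecreasing values to y under the simple order restriction.
iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)
print(y_fit)   # order-preserving estimates, nondecreasing in x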

19.
This article is concerned with the estimation problem for semiparametric varying-coefficient partially linear regression models. By combining local polynomial and least squares procedures, Fan and Huang (2005) proposed a profile least squares estimator for the parametric component and established its asymptotic normality. We further show that the profile least squares estimator achieves the law of the iterated logarithm. Moreover, we study the estimators of the functions characterizing the nonlinear part as well as the error variance. The strong convergence rate and the law of the iterated logarithm are derived for them, respectively.

20.
In this note, we revisit the single-index model with heteroscedastic errors and recommend an estimating equation method based on transferring restricted least squares to unrestricted least squares: the resulting estimator of the index parameter is asymptotically more efficient than existing estimators in the literature, in the sense that it has a smaller limiting variance.
