Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper we compare several estimators available in situations of multicollinearity (e.g., the r-k class estimator proposed by Baye and Parker, the ordinary ridge regression (ORR) estimator, the principal components regression (PCR) estimator, and the ordinary least squares (OLS) estimator) for a misspecified linear model, where the misspecification is due to the omission of some relevant explanatory variables. The comparisons are made in terms of the mean square error (MSE) of the estimators of the regression coefficients as well as of the predictor of the conditional mean of the dependent variable. It is found that, under the same conditions as in the true model, the superiority of the r-k class estimator over the ORR, PCR and OLS estimators, and that of the ORR and PCR estimators over the OLS estimator, remain unchanged in the misspecified model. Only in the comparison between the ORR and PCR estimators can no definite conclusion be drawn regarding the MSE dominance of one over the other in the misspecified model.
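A minimal numpy sketch of the r-k class estimator in its usual form (principal-components truncation to r components combined with ridge shrinkage k); the function name is ours, and the OLS reduction shown in the assertion is the standard special case r = p, k = 0:

```python
import numpy as np

def rk_class(X, y, r, k):
    """r-k class estimator: keep the r leading principal components of
    X'X and apply ridge shrinkage k to them.
    Special cases: OLS (r=p, k=0), ORR (r=p), PCR (k=0)."""
    lam, T = np.linalg.eigh(X.T @ X)       # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:r]        # indices of the r largest
    lam_r, T_r = lam[idx], T[:, idx]
    # beta(r, k) = sum_{i <= r} t_i t_i' X'y / (lambda_i + k)
    return T_r @ ((T_r.T @ (X.T @ y)) / (lam_r + k))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(size=50)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# with r = p and k = 0 the r-k class estimator reduces to OLS
assert np.allclose(rk_class(X, y, 4, 0.0), beta_ols)
```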

2.
Koul, Susarla and Van Ryzin (1981, Ann. Statist. 9, 1276–1288) proposed a generalization of the ordinary least squares estimator for linear models with censored data. This paper uses counting processes and martingale techniques to prove the asymptotic normality of the estimator. A detailed analysis of the asymptotic variance is presented.
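A sketch of the synthetic-data idea behind the Koul–Susarla–Van Ryzin estimator: inflate each uncensored response by the Kaplan–Meier estimate of the censoring survival function, then run ordinary least squares on the synthetic responses. Helper names are ours, and the left-limit handling is a simplified assumption:

```python
import numpy as np

def km_censoring_survival(y, delta):
    """Kaplan-Meier estimate of P(C >= t) at each observation,
    treating the censoring times (delta == 0) as the events."""
    order = np.argsort(y)
    d_s = delta[order]
    n = len(y)
    at_risk = n - np.arange(n)
    factors = 1.0 - (d_s == 0) / at_risk   # drop < 1 only at censoring times
    surv_sorted = np.cumprod(factors)
    surv = np.empty(n)
    # value just before each observation (left limit)
    surv[order] = np.concatenate(([1.0], surv_sorted[:-1]))
    return surv

def ksv_estimator(X, y, delta):
    """OLS on synthetic responses delta_i * y_i / P_hat(C >= y_i)."""
    g_surv = km_censoring_survival(y, delta)
    y_syn = delta * y / g_surv
    return np.linalg.lstsq(X, y_syn, rcond=None)[0]

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([2.0, 1.0]) + rng.normal(size=200)

# with no censoring the synthetic responses equal y, so KSV reduces to OLS
delta = np.ones(200)
assert np.allclose(ksv_estimator(X, y, delta),
                   np.linalg.lstsq(X, y, rcond=None)[0])
```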

3.
We consider r-k class estimation in the regression model; the r-k class estimator includes the ordinary least squares estimator, the ordinary ridge regression estimator and the principal component regression estimator as special cases. Many papers have compared the total mean square error of these estimators. Sarkar (1989, Ann. Inst. Statist. Math., 41, 717–724) asserts that the results of this comparison remain valid in a misspecified linear model. We point out some confusion in Sarkar's argument and give additional conditions under which his assertion holds.

4.
Summary  This paper is concerned with the consistency of estimators in a single common factor analysis model when the dimension of the observed vector is not fixed. Several conditions on the sample size n and the dimension p are established for the least squares estimator (L.S.E.) to be consistent. Under some assumptions, p/n → 0 is a necessary and sufficient condition for the L.S.E. to converge in probability to the true value. A sufficient condition for almost sure convergence is also given.

5.
Summary. The standard approaches to solving overdetermined linear systems Bx ≈ c construct minimal corrections to the vector c and/or the matrix B such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to c, while in data least squares (DLS) it is restricted to B. In scaled total least squares (STLS) [22], corrections to both c and B are allowed, and their relative sizes depend on a real positive parameter γ. STLS unifies several formulations, since it becomes total least squares (TLS) when γ = 1, and in the limit corresponds to LS when γ → 0 and to DLS when γ → ∞. This paper analyzes a particularly useful formulation of the STLS problem. The analysis is based on a new assumption that guarantees existence and uniqueness of meaningful STLS solutions for all parameters γ, making the whole STLS theory consistent. Our theory reveals the necessary and sufficient condition for preserving the smallest singular value of a matrix while appending (or deleting) a column. This condition represents a basic matrix-theory result for updating the singular value decomposition, as well as for the rank-one modification of the Hermitian eigenproblem. The paper allows complex data, and the equivalences in the limit of STLS with DLS and LS are proven for such data. It is shown how any linear system can be reduced to a minimally dimensioned core system satisfying our assumption. Consequently, our theory and algorithms can be applied to fully general systems. The basics of practical algorithms for both the STLS and DLS problems are indicated for either dense or large sparse systems. Our assumption and its consequences are compared with earlier approaches. Received June 2, 1999 / Revised version received July 3, 2000 / Published online July 25, 2001
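A sketch of the classical SVD solution of the TLS problem, plus the scaling reduction for STLS (solve TLS on (B, γc) and unscale by γ); this assumes the smallest singular value of the augmented matrix is simple and the solution exists, and the function names are ours:

```python
import numpy as np

def tls(B, c):
    """Total least squares solution of B x ~ c via the SVD of [B, c]."""
    n = B.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([B, c]))
    v = Vt[-1]                  # right singular vector for the smallest sigma
    return -v[:n] / v[n]

def stls(B, c, gamma):
    """Scaled TLS with corrections [E, gamma*r]: equivalent to TLS of
    (B, gamma*c), with the solution unscaled by gamma."""
    return tls(B, gamma * c) / gamma

rng = np.random.default_rng(2)
B = rng.normal(size=(10, 3))
x0 = np.array([1.0, -1.0, 2.0])

# consistent system: TLS (gamma = 1) recovers the exact solution
assert np.allclose(stls(B, B @ x0, 1.0), x0)

# noisy system: as gamma -> 0 the STLS solution approaches the LS solution
c = B @ x0 + 0.1 * rng.normal(size=10)
x_ls = np.linalg.lstsq(B, c, rcond=None)[0]
assert np.allclose(stls(B, c, 1e-4), x_ls, atol=1e-3)
```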

6.
The ridge estimator for the usual linear model is generalized by introducing an a priori vector r and an associated positive semidefinite matrix S. It is then shown that the generalized ridge estimator can be justified in two ways: (a) by minimizing the residual sum of squares subject to a constraint on the length, in the metric S, of the vector of differences between r and the estimated linear model coefficients; (b) by incorporating prior knowledge, with r playing the role of the vector of means and S proportional to the precision matrix. Both Bayesian and Aitken generalized least squares frameworks are used for the latter. The properties of the new estimator are derived and compared to those of the ordinary least squares estimator. The new method is illustrated under different assumptions on the form of the S matrix.
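Writing the Lagrange multiplier of the constrained formulation explicitly as a scalar k (our notation, not the abstract's), the estimator minimizing ||y − Xb||² + k(b − r)ᵀS(b − r) has the closed form (XᵀX + kS)⁻¹(Xᵀy + kSr), which covers OLS and ordinary ridge as special cases:

```python
import numpy as np

def generalized_ridge(X, y, r, S, k):
    """Minimize ||y - X b||^2 + k (b - r)' S (b - r):
    b = (X'X + k S)^{-1} (X'y + k S r)."""
    return np.linalg.solve(X.T @ X + k * S, X.T @ y + k * (S @ r))

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 0.0, -1.0]) + rng.normal(size=40)

# k = 0 recovers OLS; r = 0 with S = I is ordinary ridge regression
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(generalized_ridge(X, y, np.zeros(3), np.eye(3), 0.0),
                   beta_ols)
ridge = np.linalg.solve(X.T @ X + 2.0 * np.eye(3), X.T @ y)
assert np.allclose(generalized_ridge(X, y, np.zeros(3), np.eye(3), 2.0),
                   ridge)
```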

7.
In this paper, an extension of the structured total least-squares (STLS) approach for non-linearly structured matrices is presented in the so-called 'Riemannian singular value decomposition' (RiSVD) framework. It is shown that this type of STLS problem can be solved by solving a set of Riemannian SVD equations. For small perturbations the problem can be reformulated into finding the smallest singular value and the corresponding right singular vector of this Riemannian SVD. A heuristic algorithm is proposed. Some examples of Vandermonde-type matrices are used to demonstrate the improved accuracy of the obtained parameter estimator when compared to other methods such as least squares (LS) or total least squares (TLS). Copyright © 2002 John Wiley & Sons, Ltd.

8.
Ordinary least squares estimation is based on minimizing the squared distance of the response variable to its conditional mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared response variable to its second conditional moment. It is shown that this "second-order" least squares estimator is asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and that both estimators have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance reduction of the new estimator can be as high as 50% for sample sizes smaller than 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived, which was previously available in the literature only for very special cases, e.g. when the random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.
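A sketch of the second-order idea with an identity weight matrix: fit (β, σ²) by matching both the first conditional moment (y ≈ x'β) and the second (y² ≈ (x'β)² + σ²). The criterion and names are our simplified rendering, not the paper's exact (optimally weighted) estimator; skewed errors are used because that is the case where the abstract says efficiency gains appear:

```python
import numpy as np
from scipy.optimize import minimize

def sols(X, y):
    """Second-order least squares with identity weight: match the first
    and second conditional moments of y simultaneously."""
    n, p = X.shape

    def crit(theta):
        beta, s2 = theta[:p], theta[p]
        m = X @ beta
        return np.sum((y - m) ** 2 + (y ** 2 - m ** 2 - s2) ** 2)

    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start
    s20 = np.mean((y - X @ beta0) ** 2)
    res = minimize(crit, np.concatenate([beta0, [s20]]), method="BFGS")
    return res.x[:p], res.x[p]

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
beta_true = np.array([1.0, 2.0])
err = rng.chisquare(3, size=2000) - 3.0    # skewed: nonzero third moment
y = X @ beta_true + err

beta_hat, s2_hat = sols(X, y)
assert np.max(np.abs(beta_hat - beta_true)) < 0.3
```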

9.
We present a new approach to univariate partial least squares regression (PLSR) based on directional signal-to-noise ratios (SNRs). We show how PLSR, unlike principal components regression, takes into account the actual value and not only the variance of the ordinary least squares (OLS) estimator. We find an orthogonal sequence of directions associated with decreasing SNR. Then, we state partial least squares estimators as least squares estimators constrained to be null on the last directions. We also give another procedure that shows how PLSR rebuilds the OLS estimator iteratively by seeking at each step the direction with the largest difference of signals over the noise. The latter approach does not involve any arbitrary scale or orthogonality constraints.
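The "PLSR rebuilds the OLS estimator iteratively" property can be illustrated with the classical NIPALS recursion for univariate PLS (not this paper's SNR procedure, which we do not reproduce): with as many components as predictors, the PLS coefficients coincide with OLS. A sketch under the assumption of full-column-rank X:

```python
import numpy as np

def pls1(X, y, n_components):
    """Univariate PLS regression coefficients via the NIPALS recursion."""
    Xa, ya = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xa.T @ ya
        w /= np.linalg.norm(w)          # weight direction
        t = Xa @ w                      # score
        tt = t @ t
        p = Xa.T @ t / tt               # X loading
        qa = ya @ t / tt                # y loading
        Xa -= np.outer(t, p)            # deflate
        ya -= qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=30)

# with as many components as predictors, PLS rebuilds the OLS estimator
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(pls1(X, y, 4), beta_ols)
```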

10.
Summary. The standard approaches to solving overdetermined linear systems construct minimal corrections to the data to make the corrected system compatible. In ordinary least squares (LS) the correction is restricted to the right-hand side c, while in scaled total least squares (STLS) [14,12] corrections to both c and B are allowed, and their relative sizes are determined by a real positive parameter γ. As γ → 0, the STLS solution approaches the LS solution. Our paper [12] analyzed fundamentals of the STLS problem. This paper presents a theoretical analysis of the relationship between the sizes of the LS and STLS corrections (called the LS and STLS distances) in terms of γ. We give new upper and lower bounds on the LS distance in terms of the STLS distance, compare these to existing bounds, and examine the tightness of the new bounds. This work can be applied to the analysis of iterative methods which minimize the residual norm, and the generalized minimum residual method (GMRES) [15] is used here to illustrate our theory. Received July 20, 2000 / Revised version received February 28, 2001 / Published online July 25, 2001

11.
We extend the simple linear measurement error model by including a composite indicator, using the generalized maximum entropy estimator. A Monte Carlo simulation study is conducted to compare the performance of the proposed estimator with its counterpart, the ordinary least squares estimator "adjusted for attenuation". The two estimators are compared in terms of correlation with the true latent variable, standard error, and root mean squared error. Two illustrative case studies are reported in order to discuss the results obtained on real data sets and relate them to the conclusions drawn via the simulation study.

12.
The problem of optimal prediction in the stochastic linear regression model with infinitely many parameters is considered. We suggest a prediction method that asymptotically outperforms the ordinary least squares predictor. Moreover, if the random errors are Gaussian, the method is asymptotically minimax over ellipsoids in ℓ₂. The method is based on a regularized least squares estimator with the weights of the Pinsker filter. We also consider the case of dynamic linear regression, which is important in the context of transfer function modeling.

13.
The autoregressive model in a Banach space (ARB) makes it possible to represent many continuous-time processes used in practice (see, for example, D. Bosq, Linear Processes in Function Spaces: Theory and Applications, Springer, 2000, p. 150). In this Note we study a least squares estimator of the operator in ARB(1) when the operator is strictly p-integral, p ∈ ]1, ∞[, using Grenander's method of sieves (U. Grenander, Abstract Inference, Wiley, 1981). We show consistency of the sieve estimator and derive a central limit theorem for it. To cite this article: F. Rachedi, C. R. Acad. Sci. Paris, Ser. I 341 (2005).

14.
In this paper, we propose a new estimator of kurtosis in a multivariate nonnormal linear regression model. Usually, the estimator is constructed from the arithmetic mean of the second power of the squared sample Mahalanobis distances between observations and their estimated values. This estimator underestimates the kurtosis and has a large bias, even when the sample size is not small. We replace the squared distance with a transformed squared norm of the Studentized residual, using a monotonically increasing function. Our proposed estimator is defined as the arithmetic mean of the second power of these transformed squared norms, with a correction term and a tuning parameter. The correction term adjusts our estimator to be unbiased under normality, and the tuning parameter controls the sizes of the squared norms of the residuals. The family of our estimators includes estimators based on ordinary least squares and predicted residuals. Through numerical experiments we verify that the bias of our new estimator is smaller than that of the usual one.
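The "usual" estimator the abstract starts from is, in the classical i.i.d. setting, Mardia's multivariate kurtosis: the average of the squared squared Mahalanobis distances, whose population value under p-variate normality is p(p + 2). A sketch (our function name; the regression-residual refinement of the paper is not reproduced):

```python
import numpy as np

def kurtosis_mahalanobis(E):
    """Mean of the second power of the squared sample Mahalanobis
    distances (Mardia's b_{2,p})."""
    mu = E.mean(axis=0)
    S = np.cov(E, rowvar=False, bias=True)
    C = E - mu
    d2 = np.einsum('ij,jk,ik->i', C, np.linalg.inv(S), C)
    return np.mean(d2 ** 2)

rng = np.random.default_rng(6)
E = rng.normal(size=(100_000, 3))   # p = 3

# under normality the population value is p(p + 2) = 15; the sample
# version is slightly biased downward, which is the abstract's point
assert abs(kurtosis_mahalanobis(E) - 15.0) < 0.5
```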

15.
First, the second-order bias of the estimator of the autoregressive parameter based on the ordinary least squares residuals in a linear model with serial correlation is given. Second, the second-order expansion of the risk matrix of a generalized least squares estimator using this estimated parameter is obtained. This expansion is the same as that based on a suitable estimator of the autoregressive parameter that is independent of the sample. Third, it is shown that the risk matrix of the generalized least squares estimator is asymptotically equivalent to that of the maximum likelihood estimator up to second order. Last, a sufficient condition is given under which the term in this expansion due to the estimation of the autoregressive parameter vanishes under Grenander's condition on the explanatory variates.
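A minimal sketch of a feasible GLS of the kind discussed here, in its simplest two-step (Cochrane–Orcutt style) form: estimate the AR(1) parameter from the OLS residuals, quasi-difference the data, and refit. Function name and setup are ours:

```python
import numpy as np

def feasible_gls_ar1(X, y):
    """Two-step GLS: estimate the AR(1) parameter from OLS residuals,
    quasi-difference, and refit by least squares."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta_ols
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])   # lag-1 autocorrelation
    Xs = X[1:] - rho * X[:-1]                    # quasi-differenced data
    ys = y[1:] - rho * y[:-1]
    return np.linalg.lstsq(Xs, ys, rcond=None)[0], rho

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([np.ones(n), np.arange(n) / n])
u = np.zeros(n)
for i in range(1, n):                 # AR(1) errors with rho = 0.6
    u[i] = 0.6 * u[i - 1] + rng.normal()
y = X @ np.array([1.0, 3.0]) + u

beta_hat, rho_hat = feasible_gls_ar1(X, y)
assert abs(rho_hat - 0.6) < 0.05
assert np.max(np.abs(beta_hat - np.array([1.0, 3.0]))) < 0.5
```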

16.
In the multiple linear regression model, assumptions (independence, normality, variance homogeneity, and so on) are imposed on the error term. When case weights are given because of variance heterogeneity, the regression parameters can be estimated efficiently using the weighted least squares estimator. Unfortunately, like the ordinary least squares estimator, this estimator is sensitive to outliers. Thus, in this paper, we propose some statistics for the detection of outliers in weighted least squares regression.
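One standard building block for such diagnostics (a generic sketch, not this paper's specific statistics) is the internally studentized residual computed from the weighted hat matrix:

```python
import numpy as np

def wls_studentized(X, y, w):
    """Internally studentized residuals for weighted least squares,
    via the weighted hat matrix."""
    sw = np.sqrt(w)
    Xw, yw = X * sw[:, None], y * sw          # transform to equal variance
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
    h = np.diag(Xw @ np.linalg.solve(Xw.T @ Xw, Xw.T))   # leverages
    e = yw - Xw @ beta                        # scaled residuals
    n, p = X.shape
    s2 = (e @ e) / (n - p)
    return e / np.sqrt(s2 * (1.0 - h))

rng = np.random.default_rng(8)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
w = rng.uniform(0.5, 2.0, size=n)             # known case weights
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n) / np.sqrt(w)
y[10] += 8.0                                  # plant a gross outlier

r = wls_studentized(X, y, w)
assert np.argmax(np.abs(r)) == 10             # the outlier is flagged
assert abs(r[10]) > 3.0
```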

17.
18.
Dempster and Rubin (D&R), in their JRSSB paper, considered the statistical error caused by data rounding in a linear regression model and compared the Sheppard correction, the BRB correction, and the ordinary LSE by simulation. Some asymptotic results as the rounding scale tends to 0 were also presented. In previous research, we found that the ordinary sample variance of rounded data from normal populations is always inconsistent, while the sample mean of rounded data is consistent if and only if the true mean is a multiple of half the rounding scale. In light of these results, in this paper we further investigate rounding errors in linear regression. We observe that these results are the basic reason why the Sheppard correction performs better than the other methods in D&R's examples, and that their conclusion for general cases is incorrect. Examples in which the Sheppard correction works worse than the BRB correction are also given. Furthermore, we propose a new approach to estimating the parameters, called the "two-stage estimator", and establish the consistency and asymptotic normality of the new estimator.
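For context, the Sheppard correction referred to here subtracts h²/12 from the sample variance of data rounded to a grid of width h. A minimal numeric illustration (our simulation setup, not D&R's examples):

```python
import numpy as np

def sheppard_corrected_var(y_rounded, h):
    """Sheppard's correction: subtract h^2 / 12 from the sample
    variance of data rounded to a grid of width h."""
    return y_rounded.var(ddof=1) - h ** 2 / 12.0

rng = np.random.default_rng(9)
y = rng.normal(loc=0.3, scale=1.0, size=200_000)   # true variance 1.0
h = 0.5
y_r = np.round(y / h) * h                          # round to scale h

# the raw variance is inflated by roughly h^2 / 12 ~ 0.0208 ...
assert y_r.var(ddof=1) > 1.01
# ... and the corrected value is close to the true variance
assert abs(sheppard_corrected_var(y_r, h) - 1.0) < 0.02
```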

19.
The jackknife method is successful in many situations. However, when the observations come from an m-dependent stationary process, the ordinary jackknife may provide an inconsistent variance estimator. It is shown in this note that this deficiency of the jackknife can be rectified, and that the proposed jackknife variance estimator is strongly consistent.
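The ordinary (delete-1) jackknife the note starts from can be sketched as follows; for the sample mean it reproduces the classical s²/n exactly, while under m-dependence this naive version is what can fail (the note's modified, consistent version is not reproduced here):

```python
import numpy as np

def jackknife_variance(y, stat):
    """Delete-1 jackknife estimate of the variance of stat(y)."""
    n = len(y)
    loo = np.array([stat(np.delete(y, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(10)
y = rng.normal(size=50)

# for the sample mean, the jackknife reproduces the classical s^2 / n
assert np.allclose(jackknife_variance(y, np.mean), y.var(ddof=1) / len(y))
```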

20.
A spatial autoregressive process with two parameters is investigated in both the stable and the unstable case. It is shown that the limiting distribution of the least squares estimator of these parameters is normal, and that the rate of convergence is n^{3/2} if one of the key parameters equals zero and n otherwise.
