Similar Articles
20 similar articles found
1.
This paper studies influence analysis for the ridge estimator in the multivariate linear regression model. Using least squares estimation, it derives relations between the ridge estimates of the parameter matrix under a covariance-perturbed version of the model and under the original model, and gives a generalized Cook distance based on the ridge estimator for measuring the size of an observation's influence.
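The ridge estimator and a case-deletion, Cook-type influence measure can be sketched in a few lines. This is an illustrative numpy version only: the metric M = X'X + kI and the p·σ² scaling below are one common choice for a ridge-based generalized Cook distance, not necessarily the paper's exact definition.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator: beta(k) = (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def ridge_cook_distances(X, y, k):
    """Case-deletion Cook-type distances for the ridge estimator.

    Illustrative metric M = X'X + kI, scaled by p * sigma^2 with
    sigma^2 estimated from the full-data ridge residuals.
    """
    n, p = X.shape
    beta = ridge(X, y, k)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    M = X.T @ X + k * np.eye(p)
    D = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        d = ridge(X[keep], y[keep], k) - beta  # shift from deleting case i
        D[i] = d @ M @ d / (p * sigma2)
    return D

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=30)
D = ridge_cook_distances(X, y, k=1.0)
print(D.shape)  # (30,)
```

Large values of D flag observations whose deletion moves the ridge estimate the most.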

2.
In this paper, we propose a new biased estimator of the regression parameters, the generalized ridge and principal correlation estimator. We present some of its properties and prove that it is superior to the LSE (least squares estimator), the principal correlation estimator, and the ridge and principal correlation estimator under the MSE (mean squared error) and PMC (Pitman closeness) criteria, respectively.

3.
Penalized quantile regression (PQR) provides a useful tool for analyzing high-dimensional data with heterogeneity. However, its computation is challenging due to the nonsmoothness and (sometimes) the nonconvexity of the objective function. An iterative coordinate descent algorithm (QICD) was recently proposed to solve PQR with nonconvex penalty. The QICD significantly improves the computational speed but requires a double loop. In this article, we propose an alternative algorithm based on the alternating direction method of multipliers (ADMM). By rewriting the PQR in a special ADMM form, we can perform each iteration exactly without coordinate descent. This results in a new single-loop algorithm, which we refer to as the QPADM algorithm. The QPADM demonstrates favorable performance in both computational speed and statistical accuracy, particularly when the sample size n and/or the number of features p are large. Supplementary material for this article is available online.
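The computational point can be made concrete with the quantile check loss and its proximal operator: the prox has a closed form, which is what lets an ADMM splitting update the residual block exactly instead of running an inner coordinate-descent loop. This is a numpy sketch of that one ingredient; the full QPADM splitting involves more bookkeeping.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def prox_check(v, tau, alpha):
    """Elementwise proximal operator of alpha * rho_tau.

    Minimizes rho_tau(u) + (1/(2*alpha)) * (u - v)^2 in closed form:
    shift down by alpha*tau, up by alpha*(1-tau), or set to zero.
    """
    v = np.asarray(v, dtype=float)
    return np.where(v > alpha * tau, v - alpha * tau,
                    np.where(v < -alpha * (1 - tau),
                             v + alpha * (1 - tau), 0.0))

out = prox_check([2.0, -2.0, 0.1], tau=0.5, alpha=1.0)
print(out)
```

For tau = 0.5 and alpha = 1 this acts like soft-thresholding at 0.5; asymmetric tau shifts the two thresholds apart.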

4.
In ridge regression and related shrinkage methods, the ridge trace plot, a plot of estimated coefficients against a shrinkage parameter, is a common graphical adjunct to help determine a favorable trade-off of bias against precision (inverse variance) of the estimates. However, standard unidimensional versions of this plot are ill-suited for this purpose because they show only bias directly and ignore the multidimensional nature of the problem.

A generalized version of the ridge trace plot is introduced, showing covariance ellipsoids in parameter space, whose centers show bias and whose size and shape show variance and covariance, respectively, in relation to the criteria for which these methods were developed. These provide a direct visualization of both bias and precision. Even two-dimensional bivariate versions of this plot show interesting features not revealed in the standard univariate version. Low-rank versions of this plot, based on an orthogonal transformation of predictor space, extend these ideas to larger numbers of predictor variables by focusing on the dimensions in the space of predictors that are likely to be most informative about the nature of bias and precision. Two well-known datasets are used to illustrate these graphical methods. The genridge package for R implements computation and display.
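A plain univariate ridge trace, the starting point the article generalizes, is easy to compute: evaluate the coefficient path over a grid of shrinkage values. The numpy sketch below does only that (the genridge package in R implements the ellipsoid-based method itself).

```python
import numpy as np

def ridge_trace(X, y, ks):
    """Coefficient paths beta(k) over a grid of ridge penalties k."""
    XtX, Xty = X.T @ X, X.T @ y
    p = X.shape[1]
    return np.array([np.linalg.solve(XtX + k * np.eye(p), Xty) for k in ks])

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=50)   # induce collinearity
y = X @ np.array([1.0, 0.5, 2.0, -2.0]) + rng.normal(size=50)

ks = np.logspace(-3, 3, 25)
paths = ridge_trace(X, y, ks)
norms = np.linalg.norm(paths, axis=1)            # shrinks as k grows
print(paths.shape)  # (25, 4)
```

Plotting each column of `paths` against `log(k)` gives the classical trace; the article's point is that this display shows bias but not the accompanying change in variance.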

5.
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for the elastic-net penalized Huber loss regression and quantile regression in high-dimensional settings. Unlike existing coordinate descent type algorithms, the SNCD updates a regression coefficient and its corresponding subgradient simultaneously in each iteration. It combines the strengths of the coordinate descent and the semismooth Newton algorithm, and effectively solves the computational challenges posed by dimensionality and nonsmoothness. We establish the convergence properties of the algorithm. In addition, we present an adaptive version of the “strong rule” for screening predictors to gain extra efficiency. Through numerical experiments, we demonstrate that the proposed algorithm is very efficient and scalable to ultrahigh dimensions. We illustrate the application via a real data example. Supplementary materials for this article are available online.
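The objective SNCD minimizes can be written down directly. The numpy sketch below just evaluates the elastic-net penalized Huber objective; delta = 1.345 is a conventional tuning constant and the penalty parameterization is one common convention, not necessarily the paper's.

```python
import numpy as np

def huber(r, delta=1.345):
    """Huber loss: quadratic for |r| <= delta, linear beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * a - 0.5 * delta**2)

def objective(beta, X, y, lam, alpha=0.5, delta=1.345):
    """Elastic-net penalized Huber regression objective (to be minimized).

    alpha mixes the l1 part (sparsity) and the squared-l2 part (ridge).
    """
    r = y - X @ beta
    penalty = alpha * np.abs(beta).sum() + 0.5 * (1 - alpha) * (beta @ beta)
    return huber(r, delta).mean() + lam * penalty

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + rng.normal(size=40)
print(objective(np.zeros(5), X, y, lam=0.1) > 0)  # True
```

The nonsmoothness SNCD must handle comes from both the Huber kinks at ±delta and the l1 term at zero.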

6.
One useful approach for fitting linear models with scalar outcomes and functional predictors involves transforming the functional data to the wavelet domain and converting the data-fitting problem to a variable selection problem. Applying the LASSO procedure in this situation has been shown to be efficient and powerful. In this article, we explore two potential directions for improvements to this method: techniques for prescreening and methods for weighting the LASSO-type penalty. We consider several strategies for each of these directions which have never been investigated, either numerically or theoretically, in a functional linear regression context. We compare the finite-sample performance of the proposed methods through both simulations and real-data applications with both 1D signals and 2D image predictors. We also discuss asymptotic aspects. We show that applying these procedures can lead to improved estimation and prediction as well as better stability.

7.
A wide range of flows of practical interest occur in cylindrical geometries. In order to simulate such flows, an available compact finite‐difference simulation code [1] was adapted by introducing a mapping that expresses cylindrical coordinates as generalized coordinates. This formulation is conservative and avoids problems associated with the classical formulation of the Navier‐Stokes equations in cylindrical coordinates. The coordinate singularity treatment follows [2] and is modified for generalized coordinates. To retain high‐order numerical accuracy, a Fourier spectral method is employed in the azimuthal direction combined with mode clipping to alleviate time‐step restrictions due to a very fine grid spacing near the singularity at the axis (r = 0). An implementation of this scheme was successfully validated by a simulation of a tripolar vortex formation and by comparison with linear stability theory. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
We present the first methodology for dimension reduction in regressions with predictors that, given the response, follow one-parameter exponential families. Our approach is based on modeling the conditional distribution of the predictors given the response, which allows us to derive and estimate a sufficient reduction of the predictors. We also propose a method of estimating the forward regression mean function without requiring an explicit forward regression model. Whereas nearly all existing estimators of the central subspace are limited to regressions with continuous predictors only, our proposed methodology extends estimation to regressions with all categorical or a mixture of categorical and continuous predictors. Supplementary materials including the proofs and the computer code are available from the JCGS website.

9.
In this paper we deal with comparisons among several estimators available in situations of multicollinearity (e.g., the r-k class estimator proposed by Baye and Parker, the ordinary ridge regression (ORR) estimator, the principal components regression (PCR) estimator and also the ordinary least squares (OLS) estimator) for a misspecified linear model where misspecification is due to omission of some relevant explanatory variables. These comparisons are made in terms of the mean square error (mse) of the estimators of regression coefficients as well as of the predictor of the conditional mean of the dependent variable. It is found that under the same conditions as in the true model, the superiority of the r-k class estimator over the ORR, PCR and OLS estimators and those of the ORR and PCR estimators over the OLS estimator remain unchanged in the misspecified model. Only in the case of comparison between the ORR and PCR estimators, no definite conclusion regarding the mse dominance of one over the other in the misspecified model can be drawn.

10.
We treat the r-k class estimator in a regression model; it includes the ordinary least squares estimator, the ordinary ridge regression estimator and the principal component regression estimator as special cases. Many papers have compared the total mean square error of these estimators. Sarkar (1989, Ann. Inst. Statist. Math., 41, 717–724) asserts that the results of this comparison remain valid in a misspecified linear model. We point out some confusion in Sarkar's argument and give additional conditions under which his assertion holds.
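The r-k class estimator has a simple spectral form that makes the special cases explicit: with X'X = T Λ T', it keeps the top r principal components and applies ridge shrinkage k to them. A numpy sketch (my own illustration of the standard definition):

```python
import numpy as np

def rk_class(X, y, r, k):
    """r-k class estimator: beta(r, k) = T_r (Lambda_r + k I)^{-1} T_r' X'y.

    r = p, k = 0 gives OLS; r = p gives ordinary ridge; k = 0 gives PCR.
    """
    lam, T = np.linalg.eigh(X.T @ X)
    order = np.argsort(lam)[::-1]          # largest eigenvalues first
    lam, T = lam[order], T[:, order]
    Tr, lam_r = T[:, :r], lam[:r]
    return Tr @ ((Tr.T @ (X.T @ y)) / (lam_r + k))

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + rng.normal(size=30)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(rk_class(X, y, r=4, k=0.0), ols))  # True
```

Dropping components (r < p) and adding k both trade bias for variance, which is why the MSE comparisons above involve conditions on the true coefficients.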

11.

When estimating a regression function or its derivatives, local polynomials are an attractive choice due to their flexibility and asymptotic performance. Seifert and Gasser proposed ridging of local polynomials to overcome problems with variance for random design while retaining their advantages. In this article we present a data-independent rule of thumb and a data-adaptive spatial choice of the ridge parameter in local linear regression. In a framework of penalized local least squares regression, the methods are generalized to higher order polynomials, to estimation of derivatives, and to multivariate designs. The main message is that ridging is a powerful tool for improving the performance of local polynomials. A rule of thumb offers drastic improvements; data-adaptive ridging brings further but modest gains in mean square error.
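The basic ridged local linear fit can be sketched directly: kernel-weighted least squares with a penalty on the slope term only. This is a numpy illustration of the idea; Seifert and Gasser's actual ridge targets and the data-adaptive choices discussed above are more refined.

```python
import numpy as np

def local_linear_ridged(x, y, x0, h, ridge=0.0):
    """Ridged local linear fit at x0 (sketch of the idea).

    Gaussian kernel weights with bandwidth h; the ridge term penalizes
    only the slope, stabilizing the fit when few design points fall
    near x0 (the random-design problem ridging addresses).
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    A = np.column_stack([np.ones_like(x), x - x0])
    P = np.diag([0.0, ridge])                  # penalize the slope only
    b = np.linalg.solve(A.T @ (w[:, None] * A) + P, A.T @ (w * y))
    return b[0]                                # intercept = fit at x0

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=60)
est = local_linear_ridged(x, y, x0=0.25, h=0.07, ridge=1e-3)
print(abs(est - 1.0) < 0.3)  # True: sin(2*pi*0.25) = 1
```

As `ridge` grows the slope is shrunk away and the fit tends to the locally weighted mean, which is exactly the stabilizing behavior wanted in sparse regions.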

12.
The central mean and central subspaces of the generalized multiple index model are the main inference targets of sufficient dimension reduction in regression. In this article, we propose an integral transform (ITM) method for estimating these two subspaces. Applying the ITM method, estimates are derived, separately, for two scenarios: (i) no distributional assumptions are imposed on the predictors, and (ii) the predictors are assumed to follow an elliptically contoured distribution. These estimates are shown to be asymptotically normal with the usual root-n convergence rate. The ITM method differs from other existing methods in that it avoids estimation of the unknown link function between the response and the predictors and it does not rely on distributional assumptions on the predictors under scenario (i) above.

13.
SAS Program Design for Ridge Regression Analysis (cited 4 times: 1 self-citation, 3 by others)
田俊 (Tian Jun), 《数理统计与管理》, 1999, 18(3): 53-55, 51
Ridge regression analysis supplements traditional multiple regression analysis and is frequently used in practice. However, the standard statistical package SAS has no dedicated procedure for ridge regression. This paper shows how to carry out ridge regression in SAS by constructing pseudo-samples.
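The pseudo-sample device works because appending sqrt(k)·I rows to X, with zero responses, makes the OLS normal equations on the augmented data equal the ridge normal equations. The paper implements this in SAS; the numpy sketch below demonstrates the same augmentation idea and checks it against the closed-form ridge solution.

```python
import numpy as np

def ridge_via_pseudo_samples(X, y, k):
    """Ridge by OLS on augmented data.

    Append sqrt(k) * I as p pseudo-observations of X with zero
    responses; then X_aug'X_aug = X'X + kI and X_aug'y_aug = X'y,
    so any OLS routine returns the ridge solution.
    """
    p = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(k) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])
    return np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

rng = np.random.default_rng(4)
X = rng.normal(size=(25, 3))
y = rng.normal(size=25)
direct = np.linalg.solve(X.T @ X + 2.0 * np.eye(3), X.T @ y)
print(np.allclose(ridge_via_pseudo_samples(X, y, 2.0), direct))  # True
```

The same trick lets any package with only an OLS procedure fit ridge regressions, which is precisely the gap in SAS the paper fills.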

14.
Ridge Regression Analysis in SAS 6.11: Program Design and a Worked Example (cited 9 times: 0 self-citations, 9 by others)
Ridge regression analysis can handle regression problems in which the explanatory variables are multicollinear. This paper gives a program that implements ridge regression analysis in SAS version 6.11 and above, and illustrates the procedure with a concrete example.

15.
Using worked examples, this paper introduces the diagnosis of collinearity among the predictors in multiple linear regression and several ways of handling collinear regressors with the enhanced features of the REG and related procedures in the SAS/STAT (6.12) software, including variable screening, ridge regression, principal components regression, and partial least squares regression.
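Two standard collinearity diagnostics of the kind this paper discusses, variance inflation factors and the condition number, can be computed directly. A numpy sketch; the VIF > 10 cutoff used in the test is a conventional rule of thumb, and the centering step stands in for fitting each auxiliary regression with an intercept.

```python
import numpy as np

def collinearity_diagnostics(X):
    """VIFs and the condition number of the centered, scaled design.

    VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on
    the remaining columns; columns are centered first so the auxiliary
    regressions need no explicit intercept.
    """
    Z = X - X.mean(axis=0)
    p = Z.shape[1]
    vif = np.empty(p)
    for j in range(p):
        others = np.delete(Z, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, Z[:, j], rcond=None)
        rss = ((Z[:, j] - others @ beta) ** 2).sum()
        vif[j] = (Z[:, j] ** 2).sum() / rss      # TSS / RSS = 1/(1-R^2)
    s = np.linalg.svd(Z / np.linalg.norm(Z, axis=0), compute_uv=False)
    return vif, s[0] / s[-1]

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
X = np.column_stack([X, X[:, 0] + 0.05 * rng.normal(size=100)])  # near-duplicate
vif, cond = collinearity_diagnostics(X)
print(vif[0] > 10 and vif[3] > 10)  # True
```

Columns 0 and 3 are nearly identical, so both get large VIFs and the condition number blows up, which is the situation where ridge, principal components, or partial least squares regression become attractive.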

16.
17.
SiZer (significant zero crossing of the derivatives) is a multiscale smoothing method for exploring trends, maxima, and minima in data. In this article, a regression spline version of SiZer is proposed in a nonparametric regression setting by the fiducial method. The number of knots for spline interpolation is used as the scale parameter of the new SiZer, which controls the smoothness of the estimate. In the construction of the new SiZer, multiple testing adjustment is made to control the row-wise false discovery rate (FDR) of SiZer. This adjustment is appealing for exploratory data analysis and has potential to increase the power. A special map is also produced on a continuous scale using p-values to assess the significance of features. Simulations and a real data application are carried out to investigate the performance of the proposed SiZer, in which several comparisons with other existing SiZers are presented. Supplementary materials for this article are available online.

18.
Building on the ridge-type principal correlation estimator of the regression coefficients, this paper proposes a generalized ridge-type principal correlation estimator and further studies its variance optimality within the class of dimension-reduction estimators.

19.
Sliced inverse regression (SIR) is an important method for reducing the dimensionality of input variables. Its goal is to estimate the effective dimension reduction directions. In classification settings, SIR is closely related to Fisher discriminant analysis. Motivated by reproducing kernel theory, we propose a notion of nonlinear effective dimension reduction and develop a nonlinear extension of SIR called kernel SIR (KSIR). Both SIR and KSIR are based on principal component analysis. Alternatively, based on principal coordinate analysis, we propose the dual versions of SIR and KSIR, which we refer to as sliced coordinate analysis (SCA) and kernel sliced coordinate analysis (KSCA), respectively. In the classification setting, we also call them discriminant coordinate analysis and kernel discriminant coordinate analysis. The computational complexities of SIR and KSIR rely on the dimensionality of the input vector and the number of input vectors, respectively, while those of SCA and KSCA both rely on the number of slices in the output. Thus, SCA and KSCA are very efficient dimension reduction methods.
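A minimal SIR fits in a few lines: standardize the predictors, slice on the order of the response, average the standardized predictors within each slice, and eigen-decompose the covariance of the slice means. This numpy sketch covers basic SIR only, not the kernel or coordinate variants proposed here.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Basic sliced inverse regression: estimate e.d.r. directions."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    lam, V = np.linalg.eigh(cov)
    root_inv = V @ np.diag(lam ** -0.5) @ V.T      # cov^{-1/2}, symmetric
    Z = (X - mu) @ root_inv                        # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)                      # slice mean of Z
        M += (len(s) / n) * np.outer(m, m)
    w, U = np.linalg.eigh(M)
    dirs = root_inv @ U[:, ::-1][:, :n_dirs]       # back to the X scale
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, 1.0, 0.0, 0.0])) ** 3 + 0.1 * rng.normal(size=500)
d = sir_directions(X, y)[:, 0]
print(sir_directions(X, y, n_dirs=2).shape)  # (4, 2)
```

In this single-index example the leading direction aligns closely with (1, 1, 0, 0)/√2; the kernelized variants above replace the linear standardization with feature-space analogues.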

20.
The statistics literature of the past 15 years has established many favorable properties for sparse diminishing-bias regularization: techniques that can roughly be understood as providing estimation under penalty functions spanning the range of concavity between ℓ0 and ℓ1 norms. However, lasso ℓ1-regularized estimation remains the standard tool for industrial Big Data applications because of its minimal computational cost and the presence of easy-to-apply rules for penalty selection. In response, this article proposes a simple new algorithm framework that requires no more computation than a lasso path: the path of one-step estimators (POSE) does ℓ1-penalized regression estimation on a grid of decreasing penalties, but adapts coefficient-specific weights to decrease as a function of the coefficient estimated in the previous path step. This provides sparse diminishing-bias regularization at no extra cost over the fastest lasso algorithms. Moreover, our gamma lasso implementation of POSE is accompanied by a reliable heuristic for the fit degrees of freedom, so that standard information criteria can be applied in penalty selection. We also provide novel results on the distance between weighted-ℓ1 and ℓ0 penalized predictors; this allows us to build intuition about POSE and other diminishing-bias regularization schemes. The methods and results are illustrated in extensive simulations and in application of logistic regression to evaluating the performance of hockey players. Supplementary materials for this article are available online.
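The diminishing-bias idea is easiest to see on an orthonormal design, where each weighted-lasso step reduces to soft-thresholding the OLS coefficients. The weight rule below follows the gamma-lasso recipe in spirit (w_j shrinks with the previous step's |beta_j|); the published algorithm's exact details may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the lasso solution under an orthonormal design."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pose_path(z, lams, gamma=2.0):
    """Path-of-one-step-estimators sketch on an orthonormal design.

    At each step of the decreasing penalty grid, coefficient j gets
    weight w_j = 1 / (1 + gamma * |beta_j from the previous step|),
    so signals found earlier on the path are penalized less, which is
    the diminishing-bias effect.
    """
    beta = np.zeros_like(z)
    path = []
    for lam in lams:
        w = 1.0 / (1.0 + gamma * np.abs(beta))
        beta = soft(z, lam * w)
        path.append(beta.copy())
    return np.array(path)

z = np.array([3.0, 1.5, 0.2])      # OLS coefficients when X'X = I
path = pose_path(z, lams=[2.0, 1.0, 0.5])
print(path.shape)  # (3, 3)
```

At the final penalty the large coefficient ends up less shrunk than the plain lasso value soft(3.0, 0.5) = 2.5, while the noise coefficient stays at zero, at the cost of only one thresholding pass per grid point.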

