Similar Literature (20 results)
1.
The Lasso is a popular model selection and estimation procedure for linear models that enjoys nice theoretical properties. In this paper, we study the Lasso estimator for fitting autoregressive time series models. We adopt a double asymptotic framework where the maximal lag may increase with the sample size. We derive theoretical results establishing various types of consistency. In particular, we derive conditions under which the Lasso estimator for the autoregressive coefficients is model selection consistent, estimation consistent and prediction consistent. Simulation study results are reported.
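As a rough illustration of the setting (not the authors' asymptotic framework), fitting an AR(p) model with the Lasso amounts to a penalized regression of x_t on its own lags. In the Python sketch below, the lag order, penalty level and simulated series are illustrative choices only.

```python
# Sketch: Lasso fit of an AR(p) model by regressing x_t on its lags.
# Illustrative only -- the lag order, penalty level and data are made up.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p_max = 500, 10                      # sample size and maximal lag
x = np.zeros(n)
for t in range(2, n):                   # simulate a sparse AR(2) process
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Build the lagged design matrix: row t holds (x_{t-1}, ..., x_{t-p_max}).
X = np.column_stack([x[p_max - k - 1:n - k - 1] for k in range(p_max)])
y = x[p_max:]

fit = Lasso(alpha=0.05, fit_intercept=False).fit(X, y)
print(np.round(fit.coef_, 3))           # most of the ten lag coefficients should be exactly zero
```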

2.
In this paper we study the asymptotic properties of the adaptive Lasso estimate in high-dimensional sparse linear regression models with heteroscedastic errors. It is demonstrated that model selection properties and asymptotic normality of the selected parameters remain valid but with a suboptimal asymptotic variance. A weighted adaptive Lasso estimate is introduced and investigated. In particular, it is shown that the new estimate performs consistent model selection and that linear combinations of the estimates corresponding to the non-vanishing components are asymptotically normally distributed with a smaller variance than those obtained by the “classical” adaptive Lasso. The results are illustrated in a data example and by means of a small simulation study.
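For readers unfamiliar with the construction, the ordinary ("classical") adaptive Lasso can be computed with standard software by rescaling columns with data-driven weights. The sketch below uses a ridge pilot fit and illustrative tuning values; it does not implement the paper's weighted variant for heteroscedastic errors.

```python
# Sketch of the classical adaptive Lasso via column rescaling.
# Pilot estimator, weight exponent (gamma = 1) and penalty level are illustrative choices.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

pilot = Ridge(alpha=1.0).fit(X, y).coef_          # pilot estimate
w = 1.0 / (np.abs(pilot) + 1e-8)                  # adaptive weights with gamma = 1
X_tilde = X / w                                   # rescale column j by 1 / w_j
fit = Lasso(alpha=0.1, fit_intercept=False).fit(X_tilde, y)
beta_hat = fit.coef_ / w                          # undo the rescaling
print(np.flatnonzero(np.abs(beta_hat) > 1e-6))    # indices of selected variables
```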

3.
In high-dimensional data settings where p ≫ n, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection methods, including the Lasso and its generalizations, cannot distinguish covariates with weak contributions from those with none. Prediction based only on a subset model of selected covariates can therefore be inefficient. In this paper, we propose a post-selection shrinkage estimation strategy to improve the prediction performance of a selected subset model. The post-selection shrinkage estimator (PSE) is data adaptive and is constructed by shrinking a post-selection weighted ridge estimator in the direction of a selected candidate subset. Its prediction performance is explored analytically under an asymptotic distributional quadratic risk criterion. We show that the proposed PSE performs better than the post-selection weighted ridge estimator. More importantly, it significantly improves the prediction performance of any candidate subset model selected by most existing Lasso-type variable selection methods. The relative performance of the post-selection PSE is demonstrated by both simulation studies and real-data analysis.

4.
Automatic model selection for partially linear models
We propose and study a unified procedure for variable selection in partially linear models. A new type of double-penalized least squares is formulated, using the smoothing spline to estimate the nonparametric part and applying a shrinkage penalty on parametric components to achieve model parsimony. Theoretically we show that, with proper choices of the smoothing and regularization parameters, the proposed procedure can be as efficient as the oracle estimator [J. Fan, R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association 96 (2001) 1348–1360]. We also study the asymptotic properties of the estimator when the number of parametric effects diverges with the sample size. Frequentist and Bayesian estimates of the covariance and confidence intervals are derived for the estimators. One great advantage of this procedure is its linear mixed model (LMM) representation, which greatly facilitates its implementation using standard statistical software. Furthermore, the LMM framework enables one to treat the smoothing parameter as a variance component and hence conveniently estimate it together with the other regression coefficients. Extensive numerical studies are conducted to demonstrate the effective performance of the proposed procedure.

5.
A new family of penalty functions that are adaptive to the likelihood is introduced for model selection in general regression models. It arises naturally by assuming certain types of prior distributions on the regression parameters. To study the stability properties of the penalized maximum-likelihood estimator, two types of asymptotic stability are defined. Theoretical properties, including parameter estimation consistency, model selection consistency, and asymptotic stability, are established under suitable regularity conditions. An efficient coordinate-descent algorithm is proposed. Simulation results and a real data analysis show that the proposed approach has competitive performance in comparison with existing methods.
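The coordinate-descent idea mentioned above is easiest to see for the plain Lasso penalty. The minimal sketch below shows that algorithmic skeleton (cyclic univariate updates with soft-thresholding) rather than the paper's likelihood-adaptive penalties; all tuning values and data are illustrative.

```python
# Minimal coordinate descent for the plain Lasso objective
#   (1/2n)||y - Xb||^2 + lam * ||b||_1
# (shown only as the algorithmic skeleton; the paper's penalties are likelihood-adaptive).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-coordinate curvature
    r = y - X @ b                              # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # remove the j-th contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]                # add the updated contribution back
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
beta = np.zeros(20); beta[[0, 3]] = [1.5, -2.0]
y = X @ beta + 0.5 * rng.normal(size=100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))
```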

6.
We consider the linear regression model with Gaussian error. We estimate the unknown parameters by a procedure inspired by the Group Lasso estimator introduced in [22]. We show that this estimator satisfies a sparsity inequality, i.e., a bound in terms of the number of non-zero components of the oracle regression vector. We prove that this bound is better, in some cases, than the one achieved by the Lasso and the Dantzig selector.
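As a point of reference for the Group Lasso penalty itself (not the specific estimator of [22] or its sparsity-inequality analysis), the following sketch fits a group-penalized least-squares problem by proximal gradient descent with block soft-thresholding; the groups, penalty level and data are made up.

```python
# Proximal-gradient sketch of a Group Lasso fit
#   (1/2n)||y - Xb||^2 + lam * sum_g sqrt(|g|) * ||b_g||_2
# Groups, penalty level and simulated data are illustrative choices only.
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.eigvalsh(X.T @ X / n).max()      # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad                        # gradient step
        for g in groups:                           # block soft-thresholding (proximal step)
            norm_g = np.linalg.norm(z[g])
            thr = step * lam * np.sqrt(len(g))
            z[g] = 0.0 if norm_g <= thr else (1 - thr / norm_g) * z[g]
        b = z
    return b

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 12))
beta = np.concatenate([[1.0, -1.0, 0.5], np.zeros(9)])  # only the first group is active
y = X @ beta + 0.3 * rng.normal(size=150)
groups = [list(range(3 * k, 3 * k + 3)) for k in range(4)]
print(np.round(group_lasso(X, y, groups, lam=0.1), 2))
```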

7.

In this paper, we investigate the quantile varying coefficient model for longitudinal data, where the unknown nonparametric functions are approximated by polynomial splines and the estimators are obtained by minimizing the quadratic inference function. The theoretical properties of the resulting estimators are established, and they achieve the optimal convergence rate for the nonparametric functions. Since the objective function is non-smooth, an estimation procedure based on induced smoothing is proposed, and we prove that the smoothed estimator is asymptotically equivalent to the original estimator. Moreover, we propose a variable selection procedure based on the regularization method, which can simultaneously estimate and select the important nonparametric components and has the asymptotic oracle property. Extensive simulations and a real data analysis show the usefulness of the proposed method.
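A stripped-down sketch of the spline approximation behind a varying-coefficient quantile fit is given below. It ignores the longitudinal correlation, the quadratic inference function and the induced smoothing, and simply expands the coefficient function in a B-spline basis and runs a standard median regression; the model, basis size and data are illustrative assumptions.

```python
# Rough sketch of the spline approximation behind a quantile varying-coefficient fit.
# Model: Q_0.5(y | x, t) = beta(t) * x, with beta(t) expanded in a B-spline basis.
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(4)
n = 400
t = rng.uniform(0, 1, n)                       # index variable (e.g. time)
x = rng.normal(size=n)                         # covariate with a varying effect
beta_t = np.sin(2 * np.pi * t)                 # true varying coefficient
y = beta_t * x + rng.normal(scale=0.5, size=n)

B = np.asarray(dmatrix("bs(t, df=6, include_intercept=True) - 1", {"t": t}))
design = B * x[:, None]                        # basis columns scaled by the covariate

res = sm.QuantReg(y, design).fit(q=0.5)        # median regression on the spline design
beta_hat = B @ res.params                      # fitted beta(t_i) at the observed t_i
print(np.round(np.corrcoef(beta_hat, beta_t)[0, 1], 2))  # should be close to 1
```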


8.
This paper studies adaptive LASSO (least absolute shrinkage and selection operator) variable selection and coefficient estimation for measurement error models. We first construct adaptive LASSO parameter estimators for linear models and for partially linear models in which the covariates are measured with error, study the asymptotic properties of these estimators under some regularity conditions, and prove that, with an appropriately chosen tuning parameter, the adaptive LASSO estimators possess the oracle property. We then discuss the algorithm for computing the estimators and the selection of the penalty and smoothing parameters. Finally, the performance of the adaptive LASSO variable selection method is investigated through simulations and a real data analysis; the results show that both variable selection and parameter estimation perform well.

9.
Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in \(\ell_1\) regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
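The difference between quadratic and exact (absolute-value) penalties can already be seen in the simplest constrained least-squares problem, projecting a point onto the nonnegative orthant. The closed-form minimizers below are for that toy case only and are not the article's sweep-operator path algorithm.

```python
# Toy worked example: project y onto the constraint set {b >= 0}, coordinate by coordinate.
# The quadratic penalty reaches the constrained solution only as rho -> infinity, while the
# absolute-value (exact) penalty reaches it at a finite rho.
import numpy as np

y = np.array([-2.0, -0.5, 1.3])
constrained = np.maximum(y, 0.0)                 # true projection onto {b >= 0}

for rho in [1.0, 3.0, 100.0]:
    # minimizer of 0.5*(b - y)^2 + (rho/2)*min(b, 0)^2   (quadratic penalty)
    quad = np.where(y < 0, y / (1.0 + rho), y)
    # minimizer of 0.5*(b - y)^2 + rho*max(-b, 0)         (exact penalty)
    exact = np.where(y + rho < 0, y + rho, np.maximum(y, 0.0))
    print(rho, np.round(quad, 3), np.round(exact, 3))

print("constrained solution:", constrained)      # the exact penalty matches it once rho >= 2
```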

10.
Huber's contaminated model is a basic model for data with outliers. This paper aims at addressing several fundamental problems about this model. We first study its identifiability properties. Several theorems are presented to determine whether the model is identifiable in various situations. Based on these results, we discuss the problem of estimating the parameters with observations drawn from Huber's contaminated model. A definition of estimation consistency is introduced to handle the general case where the model may be unidentifiable. This consistency is a strong robustness property. After showing that existing estimators cannot be consistent in this sense, we propose a new estimator that possesses the consistency property under mild conditions. An adaptive version, which simultaneously possesses this consistency property and optimal asymptotic efficiency, is also provided. Numerical examples show that our estimators have better overall performance than existing estimators regardless of how many outliers are present in the data.
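For intuition about the contaminated model itself (the paper's new consistent estimator is not reproduced here), the toy simulation below draws from (1 − ε)N(θ, 1) + εG and compares the sample mean, which is pulled toward the contamination, with the median; the contaminating distribution G and all numbers are made up.

```python
# Illustration of Huber's contaminated model (1 - eps) * N(theta, 1) + eps * G.
# The sample mean is dragged by the contaminating component; the median is much less affected.
import numpy as np

rng = np.random.default_rng(5)
theta, eps, n = 1.0, 0.1, 10_000
clean = rng.normal(theta, 1.0, n)
contam = rng.normal(15.0, 1.0, n)                  # contaminating distribution G (arbitrary)
mix = np.where(rng.uniform(size=n) < eps, contam, clean)

print("sample mean  :", round(mix.mean(), 3))      # biased toward the outliers (~ theta + eps*14)
print("sample median:", round(np.median(mix), 3))  # much closer to theta
```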

11.
This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with a proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equations. The asymptotic results are augmented by a simulation study.
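For reference, the SCAD penalty of Fan and Li (2001) used here has a simple closed form. The snippet below evaluates the standard definition of the penalty and its derivative with the conventional choice a = 3.7; the grid of values is only for illustration.

```python
# The SCAD penalty of Fan and Li (2001) and its derivative (standard definition, a = 3.7).
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    t = np.abs(theta)
    p1 = lam * t
    p2 = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    p3 = lam ** 2 * (a + 1) / 2
    return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))

def scad_derivative(theta, lam, a=3.7):
    t = np.abs(theta)
    return lam * np.where(t <= lam, 1.0,
                          np.maximum(a * lam - t, 0.0) / ((a - 1) * lam))

grid = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
print(scad_penalty(grid, lam=1.0))     # linear near 0, then tapers off, then constant
print(scad_derivative(grid, lam=1.0))  # equals lam near 0, decays to 0 for large |theta|
```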

12.
Penalized estimation has become an established tool for regularization and model selection in regression models. A variety of penalties with specific features are available, and effective algorithms for specific penalties have been proposed. However, not much is available for fitting models with a combination of different penalties. When modeling the rent data of Munich as in our application, the various types of predictors call for a combination of a Ridge, a group Lasso and a Lasso-type penalty within one model. We propose to approximate penalties that are (semi-)norms of scalar linear transformations of the coefficient vector in generalized structured models, so that penalties of various kinds can be combined in one model. The approach is very general: the Lasso, the fused Lasso, the Ridge, the smoothly clipped absolute deviation penalty, the elastic net and many more penalties are embedded. The computation is based on conventional penalized iteratively re-weighted least squares algorithms and is hence easy to implement. New penalties can be incorporated quickly. The approach is extended to penalties with vector-based arguments. There are several possibilities for choosing the penalty parameter(s). A software implementation is available. Some illustrative examples show promising results.
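The simplest combined penalty, Ridge plus Lasso, is the elastic net and is already available in standard software. The sketch below shows that special case with scikit-learn (on made-up data with made-up tuning values), whereas the framework described above covers far more general combinations.

```python
# The simplest combined penalty (Ridge + Lasso, i.e. the elastic net) via scikit-learn.
# The framework described in the abstract is far more general (group, fused, SCAD, ... in one model).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 30))
beta = np.zeros(30); beta[:5] = 1.0
y = X @ beta + rng.normal(size=200)

# alpha is the overall penalty level, l1_ratio the mix between the L1 and L2 parts.
fit = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("number of non-zero coefficients:", np.flatnonzero(fit.coef_).size)
```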

13.
A Regularized Newton-Like Method for Nonlinear PDE
An adaptive regularization strategy for stabilizing Newton-like iterations on a coarse mesh is developed in the context of adaptive finite element methods for nonlinear PDE. Existence, uniqueness and approximation properties are known for finite element solutions of quasilinear problems assuming the initial mesh is fine enough. Here, an adaptive method is started on a coarse mesh where the finite element discretization and quadrature error produce a sequence of approximate problems with indefinite and ill-conditioned Jacobians. The methods of Tikhonov regularization and pseudo-transient continuation are related and used to define a regularized iteration using a positive semidefinite penalty term. The regularization matrix is adapted with the mesh refinements and its scaling is adapted with the iterations to find an approximate sequence of coarse-mesh solutions leading to an efficient approximation of the PDE solution. Local q-linear convergence is shown for the error and the residual in the asymptotic regime and numerical examples of a model problem illustrate distinct phases of the solution process and support the convergence theory.
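On a tiny algebraic system (nothing like the adaptive finite element setting above), the regularized iteration can be sketched as solving (J(u) + (1/Δt) I)δ = −F(u) and enlarging the pseudo-time step as the residual drops; the system, starting point and step-control rule below are illustrative choices only.

```python
# Sketch of a pseudo-transient / Tikhonov-regularized Newton iteration on a tiny system:
#   solve (J(u) + (1/dt) I) delta = -F(u), enlarging dt as the residual decreases.
import numpy as np

def F(u):
    return np.array([u[0] ** 2 + u[1] ** 2 - 4.0,   # circle of radius 2
                     u[0] * u[1] - 1.0])            # hyperbola x*y = 1

def J(u):
    return np.array([[2 * u[0], 2 * u[1]],
                     [u[1], u[0]]])

u = np.array([3.0, 0.2])                            # deliberately poor starting point
dt = 0.1                                            # small initial pseudo-time step
res = np.linalg.norm(F(u))
for k in range(30):
    delta = np.linalg.solve(J(u) + np.eye(2) / dt, -F(u))
    u_new = u + delta
    res_new = np.linalg.norm(F(u_new))
    dt *= res / max(res_new, 1e-15)                 # switched evolution relaxation update
    u, res = u_new, res_new
    if res < 1e-10:
        break

print(k, u, res)                                    # converges to a root of F
```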

14.
The partially linear model is a commonly used class of semiparametric models. This paper focuses on variable selection and parameter estimation for partially linear models via the adaptive LASSO method. First, based on profile least squares and the adaptive LASSO, the adaptive LASSO estimator for partially linear models is constructed, and the selection of the penalty parameter and the bandwidth is discussed. Under some regularity conditions, the consistency and asymptotic normality of the estimator are investigated, and it is proved that the adaptive LASSO estimator has the oracle properties. The proposed method can be easily implemented. Finally, a Monte Carlo simulation study is conducted to assess the finite-sample performance of the proposed variable selection procedure; the results show that the adaptive LASSO estimator behaves well.

15.
Shrinkage estimators of a partially linear regression parameter vector are constructed by shrinking estimators in the direction of the estimate that is appropriate when the regression parameters are restricted to a linear subspace. We investigate the asymptotic properties of positive Stein-type and improved pretest semiparametric estimators under quadratic loss. Under an asymptotic distributional quadratic risk criterion, their relative dominance picture is explored analytically. It is shown that positive Stein-type semiparametric estimators perform better than the usual Stein-type and least squares semiparametric estimators, and that an improved pretest semiparametric estimator is superior to the usual pretest semiparametric estimator. We also consider an absolute penalty type estimator for partially linear models and provide Monte Carlo simulation comparisons of the positive shrinkage, improved pretest and absolute penalty type estimators. The comparison shows that the shrinkage method performs better than the absolute penalty type estimation method when the dimension of the parameter space is much larger than that of the linear subspace.

16.
We consider a generalized risk process which consists of a subordinator plus a spectrally negative Lévy process. Our interest is to estimate the expected discounted penalty function (EDPF) from a data set of the kind available in insurance practice. We construct an empirical-type estimator of the Laplace transform of the EDPF and recover the EDPF by a regularized Laplace inversion. The asymptotic behavior of the estimator under a high-frequency assumption is investigated.

17.
The accurate estimation of a precision matrix plays a crucial role in the current age of high-dimensional data explosion. One of the most prominent and commonly used techniques for this problem is \(\ell_1\)-norm (Lasso) penalization of a given loss function. This approach guarantees the sparsity of the precision matrix estimate for properly selected penalty parameters. However, the \(\ell_1\)-norm penalization often fails to control the bias of the obtained estimator because of its overestimation behavior. In this paper, we introduce two adaptive extensions of the recently proposed \(\ell_1\)-norm penalized D-trace loss minimization method. They aim at reducing the bias produced in the estimator. Extensive numerical results, using both simulated and real datasets, show the advantage of our proposed estimators.
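The adaptive D-trace estimators are not part of standard libraries. As a baseline, the sketch below computes a plain \(\ell_1\)-penalized Gaussian-likelihood estimate (scikit-learn's GraphicalLasso) on simulated data with a tridiagonal true precision matrix, i.e. the kind of estimate whose bias the adaptive extensions aim to reduce; the penalty level and data are illustrative.

```python
# Baseline sparse precision-matrix estimate via the l1-penalized Gaussian likelihood
# (scikit-learn's GraphicalLasso). The paper's adaptive D-trace estimators are not shown here.
import numpy as np
from sklearn.covariance import GraphicalLasso
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)
p, n = 10, 500
prec_true = toeplitz([1.0, 0.4] + [0.0] * (p - 2))   # tridiagonal true precision matrix
cov_true = np.linalg.inv(prec_true)
X = rng.multivariate_normal(np.zeros(p), cov_true, size=n)

fit = GraphicalLasso(alpha=0.05).fit(X)
print("non-zero entries per row of the estimate:",
      (np.abs(fit.precision_) > 1e-4).sum(axis=1))   # ideally 2-3 (tridiagonal pattern)
```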

18.
In some applications of kernel density estimation the data may have a highly non-uniform distribution and be confined to a compact region. Standard fixed bandwidth density estimates can struggle to cope with the spatially variable smoothing requirements, and will be subject to excessive bias at the boundary of the region. While adaptive kernel estimators can address the first of these issues, the study of boundary kernel methods has been restricted to the fixed bandwidth context. We propose a new linear boundary kernel which reduces the asymptotic order of the bias of an adaptive density estimator at the boundary, and is simple to implement even on an irregular boundary. The properties of this adaptive boundary kernel are examined theoretically. In particular, we demonstrate that the asymptotic performance of the density estimator is maintained when the adaptive bandwidth is defined in terms of a pilot estimate rather than the true underlying density. We examine the performance for finite sample sizes numerically through analysis of simulated and real data sets.
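The linear boundary kernel proposed above is not reproduced here. As a simple stand-in, the sketch below combines an Abramson-type adaptive bandwidth with reflection at the boundary for data supported on [0, ∞); the rule-of-thumb pilot bandwidth and simulated exponential data are assumptions of the sketch.

```python
# Simple sketch: Abramson-type adaptive bandwidths plus boundary reflection for data on [0, inf).
# This is not the paper's linear boundary kernel; it only illustrates the two ingredients
# (locally adaptive smoothing and a boundary correction).
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(8)
x = rng.exponential(scale=1.0, size=500)          # data confined to [0, inf)

h0 = 1.06 * x.std() * len(x) ** (-1 / 5)          # fixed pilot bandwidth (rule of thumb)
pilot = norm.pdf((x[:, None] - x[None, :]) / h0).mean(axis=1) / h0
lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** (-0.5)   # Abramson local factors
h = h0 * lam                                      # adaptive bandwidth per data point

def density(t):
    # Reflect each kernel at 0 so that no mass leaks below the boundary.
    k = norm.pdf((t[:, None] - x[None, :]) / h) + norm.pdf((t[:, None] + x[None, :]) / h)
    return (k / h).mean(axis=1)

grid = np.linspace(0, 4, 5)
print(np.round(density(grid), 3))
print(np.round(expon.pdf(grid), 3))               # compare with the true Exp(1) density
```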

19.
Regularization methods, including the Lasso, group Lasso, and SCAD, typically focus on selecting variables with strong effects while ignoring weak signals. This may result in biased prediction, especially when weak signals outnumber strong signals. This paper aims to incorporate weak signals in variable selection, estimation, and prediction. We propose a two-stage procedure consisting of variable selection and post-selection estimation. The variable selection stage involves a covariance-insured screening for detecting weak signals, whereas the post-selection estimation stage involves a shrinkage estimator for jointly estimating the strong and weak signals selected in the first stage. We term the proposed method the covariance-insured screening-based post-selection shrinkage estimator. We establish asymptotic properties for the proposed method and show, via simulations, that incorporating weak signals can improve estimation and prediction performance. We apply the proposed method to predict annual gross domestic product rates based on various socioeconomic indicators for 82 countries.

20.
In this paper we discuss the asymptotic properties of quantile processes under random censoring. In contrast to most work in this area, we prove weak convergence of an appropriately standardized quantile process under the assumption that the quantile regression model is linear only in the region where the process is investigated. Additionally, we discuss properties of the quantile process in sparse regression models, including quantile processes obtained from the Lasso and adaptive Lasso. The results are derived by a combination of modern empirical process theory, classical martingale methods and a recent result of Kato (2009).

