18 similar documents were found (search time: 328 ms).
2.
This paper studies simultaneous parameter estimation and variable selection in varying-coefficient partially linear models when the response variable is subject to missingness and some covariates are measured with error. Missing data are handled by imputation, and estimation and variable selection are carried out by combining a corrected profile least-squares estimator with the SCAD penalty. The resulting estimators are shown to be asymptotically normal and to possess the oracle property, and numerical simulations examine their finite-sample behavior.
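The SCAD penalty used above has a standard closed form (piecewise linear, quadratic, then flat, conventionally with a = 3.7). A minimal numpy sketch of just the penalty function, independent of the paper's missing-data and measurement-error setting:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Elementwise SCAD penalty: linear near zero (lasso-like),
    a quadratic transition, then constant beyond a*lam so that
    large coefficients are not over-shrunk."""
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        b <= lam,
        lam * b,                                                # lasso-like near zero
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),  # quadratic transition
            lam**2 * (a + 1) / 2,                               # flat tail
        ),
    )
```

The flat tail is what drives the oracle property: sufficiently large coefficients incur a constant penalty and hence essentially no estimation bias.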
3.
This paper studies parameter estimation in semiparametric models with residual autoregressive errors, estimating the parametric component by generalized least squares. Simulations show that the generalized least-squares estimate (GLSE) of the parametric component outperforms the ordinary least-squares estimate (OLSE).
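For intuition, here is a small numpy sketch of GLS versus OLS under AR(1) errors, the kind of residual-autoregressive structure the entry refers to; the model, coefficients, and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200, 0.7
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Generate AR(1) errors: e_t = rho * e_{t-1} + u_t
e = np.zeros(n)
u = rng.normal(size=n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]
y = X @ np.array([1.0, 2.0]) + e

# OLS ignores the serial correlation in the errors
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# GLS whitens with the AR(1) covariance: Omega_ij = rho^|i-j| / (1 - rho^2)
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)
Oinv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
```

Both estimators are unbiased here; the gain from GLS shows up in the variance of the estimates over repeated simulations, which is what the paper's random simulations compare.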
4.
The growth curve model is a classical multivariate linear model that occupies an important place in modern statistics. Based on the growth curve model after the Potthoff-Roy transformation, this paper first derives the penalized least-squares estimator of the parameter matrix using the adaptive LASSO as the penalty function, thereby achieving variable selection. Second, based on local asymptotic quadratic approximation, a unified approximate expression is given for penalized least-squares estimators of the growth curve model. The penalized least-squares estimator of the Potthoff-Roy-transformed model is then discussed, and the adaptive LASSO is shown to possess the oracle property. Finally, several variable selection methods are compared by simulation; the results show that the adaptive LASSO performs well and that, on balance, the Potthoff-Roy transformation is preferable to the vectorization (stacking) transformation.
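The adaptive LASSO weights each coefficient's L1 penalty by an inverse power of a pilot estimate (here OLS), which is what yields the oracle property. A generic numpy sketch for a plain linear model — not the growth-curve/Potthoff-Roy setting of the paper — using a simple coordinate-descent lasso solver:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent lasso for 0.5/n * ||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            z = X[:, j] @ r_j / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

def adaptive_lasso(X, y, lam, gamma=1.0):
    """Adaptive lasso: penalize coefficient j with weight 1/|b_ols_j|^gamma.
    Solved by rescaling columns, running a plain lasso, and scaling back."""
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-8)
    c = lasso_cd(X / w, y, lam)                        # lasso in rescaled coordinates
    return c / w
```

Coefficients with small pilot estimates receive huge weights and are driven exactly to zero, while large coefficients are penalized only lightly.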
5.
As a generalization of both partially linear models and additive models, the semiparametric additive model is widely used in statistical modeling. This paper considers estimation for such models when the covariates in the linear part are collinear. Based on the profile least-squares method, a generalized profile Liu estimator of the parametric component is proposed, and its bias, variance, and mean squared error are derived. Numerical simulations confirm the effectiveness of the proposed method.
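The classical (non-profile) Liu estimator that such proposals build on has a simple closed form; a numpy sketch for an ordinary linear model (the paper's profile least-squares version for the semiparametric case is more involved):

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator b_d = (X'X + I)^{-1} (X'y + d * b_ols), 0 < d <= 1.
    d = 1 recovers OLS; smaller d shrinks more, mitigating collinearity."""
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(X.shape[1]), X.T @ y + d * b_ols)
```

In the eigenbasis of X'X the estimator multiplies each OLS coordinate by (λ_i + d)/(λ_i + 1) < 1, so small eigenvalues (the collinear directions) are shrunk most.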
7.
Suppose survival data, after an unknown monotone transformation, equal a linear function of the covariates plus a random error whose distribution function is either known or known up to unknown parameters. This paper first constructs a consistent estimator of the unknown monotone transformation, then applies the transformation to the censored data; on that basis, a least-squares estimator of the covariate coefficients is obtained and its large-sample properties are discussed.
8.
This paper extends semiparametric linear mixed-effects models to a class of zero-inflated longitudinal or clustered data, proposing a new class of semiparametric mixed-effects models. The smoothing parameter is chosen by generalized cross-validation, and the parametric and nonparametric components are estimated by maximizing a penalized likelihood via the EM algorithm. Finally, simulations and a real-data example illustrate the effectiveness of the proposed method.
11.
Wang-li Xu, Acta Mathematicae Applicatae Sinica (English Series), 2006, 22(2): 345-352
The issue of bandwidth selection in the kernel smoothing method is considered within the context of partially linear models. In this paper, we study the asymptotic behavior of the bandwidth choice based on the generalized cross-validation (GCV) approach and prove that this bandwidth choice is asymptotically optimal. Numerical simulations are also conducted to investigate the empirical performance of generalized cross-validation.
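The GCV criterion can be made concrete for any linear smoother with hat matrix S_h: GCV(h) = n^{-1} ||y - S_h y||^2 / (1 - tr(S_h)/n)^2. A self-contained numpy sketch using a Nadaraya-Watson smoother with a Gaussian kernel (the partially linear setting of the paper adds a parametric component, omitted here):

```python
import numpy as np

def nw_smoother_matrix(x, h):
    """Hat matrix S of a Nadaraya-Watson smoother with a Gaussian kernel."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def gcv(x, y, h):
    """GCV(h) = (RSS / n) / (1 - tr(S)/n)^2 for the smoother above."""
    n = len(x)
    S = nw_smoother_matrix(x, h)
    resid = y - S @ y
    return (resid @ resid / n) / (1 - np.trace(S) / n) ** 2

# Pick the bandwidth minimizing GCV over a grid (toy data, for illustration)
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=100)
hs = np.linspace(0.01, 0.5, 50)
h_best = hs[np.argmin([gcv(x, y, h) for h in hs])]
```

The denominator penalizes effective degrees of freedom tr(S), so GCV trades off residual fit against smoother complexity without requiring an estimate of the noise variance.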
12.
Journal of Computational and Graphical Statistics, 2002, 11(4): 735-757
Penalized splines, or P-splines, are regression splines fit by least squares with a roughness penalty. P-splines have much in common with smoothing splines, but the type of penalty used with a P-spline is somewhat more general than for a smoothing spline. Also, the number and location of the knots of a P-spline are not fixed, as they are with a smoothing spline. Generally, the knots of a P-spline are at fixed quantiles of the independent variable, and the only tuning parameters to choose are the number of knots and the penalty parameter. In this article, the effects of the number of knots on the performance of P-splines are studied. Two algorithms are proposed for the automatic selection of the number of knots. The myopic algorithm stops when no improvement in the generalized cross-validation statistic (GCV) is noticed with the last increase in the number of knots. The full search examines all candidates in a fixed sequence of possible numbers of knots and chooses the candidate that minimizes GCV. The myopic algorithm works well in many cases but can stop prematurely. The full-search algorithm worked well in all examples examined. A Demmler-Reinsch type diagonalization for computing univariate and additive P-splines is described. The Demmler-Reinsch basis is not effective for smoothing splines because smoothing splines have too many knots. For P-splines, however, the Demmler-Reinsch basis is very useful for super-fast generalized cross-validation.
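The two knot-selection strategies can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients — a simplification of P-splines, but enough to show the myopic stopping rule versus the full search (the candidate sequence and fixed penalty parameter here are illustrative):

```python
import numpy as np

def pspline_gcv(x, y, num_knots, lam=1.0):
    """GCV of a penalized spline with a truncated-line basis and a
    ridge penalty on the knot coefficients (intercept/slope unpenalized)."""
    knots = np.quantile(x, np.linspace(0, 1, num_knots + 2)[1:-1])
    B = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * num_knots)
    H = B @ np.linalg.solve(B.T @ B + lam * D, B.T)   # hat matrix
    resid = y - H @ y
    n = len(x)
    return (resid @ resid / n) / (1 - np.trace(H) / n) ** 2

def myopic_knot_search(x, y, candidates=(5, 10, 20, 40, 80)):
    """Stop as soon as adding knots fails to improve GCV (may stop prematurely)."""
    best_k, best_g = candidates[0], pspline_gcv(x, y, candidates[0])
    for k in candidates[1:]:
        g = pspline_gcv(x, y, k)
        if g >= best_g:
            break
        best_k, best_g = k, g
    return best_k

def full_search(x, y, candidates=(5, 10, 20, 40, 80)):
    """Evaluate every candidate and take the GCV minimizer."""
    return min(candidates, key=lambda k: pspline_gcv(x, y, k))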
13.
Wenjiang J. Fu, Journal of Computational and Graphical Statistics, 1998, 7(3): 397-416
Bridge regression, a special family of penalized regressions with penalty function Σ|βj|^γ, γ ≤ 1, is considered. A general approach to solving for the bridge estimator is developed. A new algorithm for the lasso (γ = 1) is obtained by studying the structure of the bridge estimators. The shrinkage parameter γ and the tuning parameter λ are selected via generalized cross-validation (GCV). A comparison between the bridge model (γ ≤ 1) and several other shrinkage models, namely ordinary least-squares regression (λ = 0), the lasso (γ = 1), and ridge regression (γ = 2), is made through a simulation study. It is shown that bridge regression performs well compared to the lasso and ridge regression. These methods are demonstrated through an analysis of prostate cancer data. Some computational advantages and limitations are discussed.
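One common way to approximate a bridge estimate for γ < 1 is local quadratic approximation: majorize each penalty term at the current iterate by a quadratic and repeatedly solve the resulting ridge system. This is a hedged sketch of that generic device, not the algorithm developed in the article:

```python
import numpy as np

def bridge_lqa(X, y, lam, gamma=0.5, n_iter=100, eps=1e-8):
    """Approximate minimizer of ||y - Xb||^2 + lam * sum_j |b_j|^gamma
    via local quadratic approximation: the penalty is majorized at the
    current iterate by (lam * gamma / 2) * |b_j|^(gamma - 2) * b_j^2."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # start from OLS
    for _ in range(n_iter):
        w = 0.5 * lam * gamma * np.abs(b).clip(eps) ** (gamma - 2)
        b = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
    return b
```

Because γ − 2 < 0, small coefficients get enormous ridge weights and collapse toward zero across iterations, which is how the nonconvex bridge penalty produces sparse solutions; the `eps` clip only guards against division by zero.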
14.
We consider the use of B-spline nonparametric regression models estimated by the maximum penalized likelihood method for extracting information from data with complex nonlinear structure. Crucial points in B-spline smoothing are the choices of a smoothing parameter and the number of basis functions, for which several selectors have been proposed based on cross-validation and the Akaike information criterion (AIC). It might, however, be noticed that AIC is a criterion for evaluating models estimated by the maximum likelihood method, and that it was derived under the assumption that the true distribution belongs to the specified parametric model. In this paper we derive information criteria for evaluating B-spline nonparametric regression models estimated by the maximum penalized likelihood method in the context of generalized linear models under model misspecification. We use Monte Carlo experiments and real data examples to examine the properties of our criteria, including various selectors proposed previously.
15.
The seamless-L0 (SELO) penalty is a smooth function on [0, ∞) that very closely resembles the L0 penalty, and it has been demonstrated theoretically and practically to be effective in nonconvex penalization for variable selection. In this paper, we first generalize SELO to a class of penalties retaining the good features of SELO, and then propose variable selection and estimation in linear models using the proposed generalized SELO (GSELO) penalized least-squares (PLS) approach. We show that the GSELO-PLS procedure possesses the oracle property and consistently selects the true model under some regularity conditions in the presence of a diverging number of variables. The entire path of GSELO-PLS estimates can be efficiently computed through a smoothing quasi-Newton (SQN) method. A modified BIC coupled with a continuation strategy is developed to select the optimal tuning parameter. Simulation studies and an analysis of clinical data are carried out to evaluate the finite-sample performance of the proposed method. In addition, numerical experiments involving simulation studies and an analysis of microarray data are also conducted for GSELO-PLS in high-dimensional settings.
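For reference, a commonly used parameterization of the SELO penalty has λ controlling the saturation height and a small τ controlling the sharpness near zero; treat this exact form as an assumption here rather than a quotation from the paper:

```python
import numpy as np

def selo(beta, lam, tau=0.01):
    """Seamless-L0 penalty: 0 at beta = 0, rising steeply on the scale of
    tau, and saturating at lam -- a smooth surrogate for lam * 1{beta != 0}."""
    b = np.abs(np.asarray(beta, dtype=float))
    return (lam / np.log(2.0)) * np.log(b / (b + tau) + 1.0)
```

As τ → 0 the penalty converges pointwise to the L0 penalty λ·1{β ≠ 0}, which is why SELO-type procedures behave like best-subset selection while remaining smooth enough for quasi-Newton computation.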
16.
Chong Gu, Journal of Computational and Graphical Statistics, 1992, 1(2): 169-179
This article describes an appropriate way of implementing the generalized cross-validation method and some other least-squares-based smoothing parameter selection methods in penalized likelihood regression problems, and explains the rationales behind it. Simulations of limited scale are conducted to back up the semitheoretical analysis.
17.
In this paper, the authors investigate three aspects of statistical inference for partially linear regression models in which some covariates are measured with errors. First, a bandwidth selection procedure is proposed, combining the difference-based technique with the GCV method. Second, a goodness-of-fit test procedure is proposed, extending the generalized likelihood technique. Third, a variable selection procedure for the parametric part is provided, based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite-sample performance of the proposed procedures.
18.
In maximum penalized or regularized methods, it is important to select a tuning parameter appropriately. This paper proposes a direct plug-in method for tuning parameter selection. The tuning parameters selected using a generalized information criterion (Konishi and Kitagawa, Biometrika, 83, 875-890, 1996) and cross-validation (Stone, Journal of the Royal Statistical Society, Series B, 36, 111-147, 1974) are shown to be asymptotically equivalent to those selected using the proposed method, from the perspective of estimating an optimal tuning parameter. Because of its directness, the proposed method is superior to the two selection methods mentioned above in terms of computational cost. Some numerical examples involving penalized spline generalized linear model regressions are provided.