Similar Literature (20 results)
1.
This paper studies B-spline estimation of semiparametric panel data models with fixed effects. An explicit expression is given for the asymptotic bias of the B-spline estimator of the nonparametric function m. The results show that the asymptotic bias of the B-spline estimator does not depend on the working covariance matrix, a conclusion confirmed by Monte Carlo simulation.

2.
朱利平, 於州. Science in China Series A, 2007, 37(7): 865-877
Dimension reduction techniques are helpful, and often necessary, in nonparametric regression. In this area, sliced inverse regression (SIR) is effective for estimating the central dimension reduction (CDR) subspace. This paper proposes estimating the SIR kernel matrix with least-squares regression splines. By introducing appropriate weight functions, the spline approximation also handles heteroscedasticity well. Asymptotic normality of the spline approximation is established for spline knots chosen over a wide range, which is essentially similar to the results obtained with kernel smoothing. In addition, a modified BIC criterion based on the eigenvalues of the SIR matrix is proposed; it can be used to determine the structural dimension for SIR and other similar dimension reduction methods. A real example illustrates the practical value of the method, and simulation comparisons with existing methods are reported.
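For orientation, the following is a minimal sketch of the classical slicing form of SIR, not the least-squares regression-spline variant proposed in the abstract; the number of slices and directions are illustrative tuning choices.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Estimate central dimension-reduction directions by slicing on y."""
    n, p = X.shape
    # Standardize X: z = Sigma^{-1/2} (x - mean)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    Z = (X - mu) @ inv_sqrt

    # Slice observations by the order of y and average Z within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))                      # SIR kernel matrix Cov(E[Z | y])
    for idx in slices:
        zbar = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(zbar, zbar)

    # Leading eigenvectors of M, mapped back to the original X scale
    vals, vecs = np.linalg.eigh(M)
    dirs = inv_sqrt @ vecs[:, ::-1][:, :n_dirs]
    return dirs, vals[::-1]                   # eigenvalues can feed a BIC-type criterion
```

The returned eigenvalues are the quantities a BIC-type criterion, such as the modified BIC mentioned above, would examine to choose the structural dimension.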

3.
This paper considers estimation in semiparametric modeling of longitudinal data and proposes an iterated weighted partial spline least squares estimator of the parametric component. In the sense of asymptotic variance, the estimator is more efficient than the weighted partial spline least squares estimator, and it is asymptotically normal. In addition, an adaptive procedure is given which guarantees that the iteration terminates after finitely many steps and yields an estimator asymptotically equivalent to the best estimator the iterative method can produce. These results extend the results of Chen and Shao to semiparametric regression.

4.
Based on the global smoothing method of polynomial splines, spline estimators are constructed for the coefficient functions in functional-coefficient linear autoregressive models. Under suitable conditions, consistency of the polynomial spline estimators of the coefficient functions is proved and their rates of convergence are derived. Simulated examples verify the theoretical results.
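A minimal sketch of the idea, for a two-lag model y_t = a1(u_t) y_{t-1} + a2(u_t) y_{t-2} + e_t with smoothing variable u_t = y_{t-1}: expand each coefficient function in a spline basis and fit by global least squares. The lag structure, the choice of u_t, the truncated-power linear basis, and the knots are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def spline_basis(u, knots):
    """Linear truncated-power basis: [1, u, (u - k1)_+, ..., (u - kK)_+]."""
    cols = [np.ones_like(u), u] + [np.maximum(u - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_far(y, knots):
    u  = y[1:-1]                  # smoothing variable u_t = y_{t-1}
    y1 = y[1:-1]                  # lag-1 regressor
    y2 = y[:-2]                   # lag-2 regressor
    yt = y[2:]                    # response
    B  = spline_basis(u, knots)   # shared basis for both coefficient functions
    # Design matrix: each coefficient function contributes (basis * lag) columns
    X = np.hstack([B * y1[:, None], B * y2[:, None]])
    coef, *_ = np.linalg.lstsq(X, yt, rcond=None)
    m = B.shape[1]
    # a1_hat(u), a2_hat(u) evaluated at the observed u
    return B @ coef[:m], B @ coef[m:]
```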

5.
陈夏, 陈希孺. Science in China Series A, 2005, 35(4): 463-480
A thorough treatment is given of the theory of a quasi-likelihood estimator of the parameters in generalized linear models, in which the unknown covariance matrix of the response is estimated from the sample. The estimator is shown to be asymptotically efficient in the following sense: as the sample size n → ∞, it is asymptotically normal, and the covariance matrix of its limiting distribution coincides with that of the quasi-likelihood estimator obtained when the covariance matrix of the response is fully known.

6.
This paper considers estimation of fixed-effects panel data partially linear regression models whose errors follow an autoregressive process. For fixed-effects panel data with short time series, the usual methods for fitting an autoregressive error structure do not yield a consistent estimator of the autoregressive coefficients; an alternative estimator is therefore proposed, shown to be consistent, and the method is feasible for autoregressive errors of any order. Furthermore, by combining a B-spline approximation, the cross-section least squares dummy variable (LSDV) technique, and the consistent estimator of the autoregressive error structure, the parametric part is estimated by weighted cross-section LSDV and the nonparametric part by a weighted B-spline (BS) estimator. The weighted cross-section LSDV estimator is shown to be asymptotically normal and asymptotically more efficient than the estimator that ignores the autoregressive error structure, and the asymptotic bias and asymptotic normality of the weighted BS estimator are derived. A simulation study and a real example illustrate the finite-sample performance of the proposed estimation procedure.

7.
High-dimensional data pose a serious challenge to traditional covariance matrix estimation methods, and the effects of dimensionality and noise cannot be ignored. This paper first builds a linear regression model for stock returns with risk factors as covariates; it then introduces a penalty function that groups regression coefficients with very close values, and on this basis estimates the high-dimensional covariance matrix, yielding a block model based on a regression clustering algorithm (BM-CAR) that overcomes the drawbacks of traditional sparse covariance estimation. Simulation and empirical studies show that, compared with the factor-based covariance estimator, BM-CAR clearly improves the estimation efficiency of the high-dimensional covariance matrix; applied to portfolio selection, it brings investors higher returns and greater economic welfare.

8.
This paper considers estimation of the multivariate partially linear regression model, obtaining the least squares estimator of the parametric component and a B-spline estimator of the nonparametric function; asymptotic normality of the parametric estimator is proved and the optimal rate of convergence of the nonparametric estimator is given.
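A minimal sketch of the basic estimation idea for a partially linear model y = Xβ + g(t) + ε: approximate g with a spline basis and fit the parametric and spline coefficients jointly by least squares. Here a truncated-power cubic basis stands in for the B-spline basis of the abstract, and the knots are illustrative.

```python
import numpy as np

def trunc_power_basis(t, knots, degree=3):
    """Truncated-power basis: [1, t, ..., t^d, (t - k1)_+^d, ...]."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

def fit_partially_linear(X, t, y, knots):
    B = trunc_power_basis(t, knots)
    Z = np.hstack([X, B])                      # parametric and spline columns
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta_hat = coef[:X.shape[1]]               # parametric component
    g_hat = B @ coef[X.shape[1]:]              # fitted nonparametric part at t
    return beta_hat, g_hat
```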

9.
This paper considers the general blockwise seemingly unrelated regression (SUR) model and its corresponding reduced model, and gives a necessary and sufficient condition under which the covariance-improved estimators of the unknown regression coefficients and their estimable functions in the reduced model remain covariance-improved estimators of the corresponding parameters in the blockwise SUR model.

10.
The covariance-improved method and parameter estimation in seemingly unrelated regressions
For a system of two linear regression equations with correlated error terms, this paper applies the covariance-improved method to obtain an iterative sequence of parameter estimators. When the covariance matrix is known, the sequence is shown to converge everywhere to the best linear unbiased estimator, its covariance matrices decrease monotonically, in the sense of the matrix partial order, to that of the best linear unbiased estimator, and the estimators are optimal under Pitman's criterion. When the covariance matrix is unknown, the two-step estimator based on an unrestricted estimator of the covariance matrix is shown to be unbiased, consistent, and asymptotically normal. In a certain sense the proposed estimators outperform some existing estimators in the literature, and the results also demonstrate the effectiveness of the covariance-improved approach.
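For context, the following is a minimal sketch of the classical two-step feasible GLS (Zellner-type) estimator for two seemingly unrelated regressions; the paper's covariance-improved iteration is related to, but not identical with, this sketch.

```python
import numpy as np

def sur_two_step(X1, y1, X2, y2):
    n = len(y1)
    # Step 1: equation-by-equation OLS and the residual covariance of the errors
    b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
    b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
    R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
    S = R.T @ R / n                              # 2x2 estimated error covariance

    # Step 2: GLS on the stacked system with Omega = S kron I_n
    X = np.block([[X1, np.zeros((n, X2.shape[1]))],
                  [np.zeros((n, X1.shape[1])), X2]])
    y = np.concatenate([y1, y2])
    W = np.kron(np.linalg.inv(S), np.eye(n))     # inverse of Omega
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                  # stacked (beta1, beta2)
```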

11.
In this paper, we study the local asymptotic behavior of the regression spline estimator in the framework of the marginal semiparametric model. Similarly to Zhu, Fung and He (2008), we give an explicit expression for the asymptotic bias of the regression spline estimator of the nonparametric function f. Our results also show that the asymptotic bias of the regression spline estimator does not depend on the working covariance matrix, which distinguishes the regression splines from the smoothing splines and the seemingly u...

12.
In this article, we propose a new method of bias reduction in nonparametric regression estimation. The proposed estimator has asymptotic bias of order h^4, where h is a smoothing parameter, in contrast to the usual bias order h^2 for local linear regression. In addition, the proposed estimator has the same order of asymptotic variance as the local linear regression. Our proposed method is closely related to the bias reduction method for kernel density estimation proposed by Chung and Lindsay (2011). However, our method is not a direct extension of their density estimate, but a totally new one based on the bias cancelation result of their proof.

13.
Differenced estimators of variance bypass the estimation of the regression function and thus are simple to calculate. However, two problems remain: most differenced estimators do not achieve the asymptotically optimal rate for the mean squared error, and for finite samples the estimation bias is also important yet rarely considered further. In this paper, we estimate the variance as the intercept in a linear regression with the lagged Gasser-type variance estimator as the dependent variable. For the equidistant design, our estimator is not only n^{1/2}-consistent and asymptotically normal, but also achieves the optimal bound in terms of estimation variance with smaller asymptotic bias. Simulation studies show that our estimator has smaller mean squared error than some existing differenced estimators, especially when the regression function oscillates strongly and the sample size is small.
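For reference, the following is a minimal sketch of two classical difference-based variance estimators for y_i = m(x_i) + eps_i on an equidistant design: the first-difference (Rice-type) estimator and a second-order (Gasser-type) pseudo-residual estimator. The intercept-regression refinement proposed in the abstract is not reproduced here.

```python
import numpy as np

def rice_variance(y):
    """sigma^2_hat = sum (y_{i+1} - y_i)^2 / (2 (n - 1))."""
    d = np.diff(y)
    return np.sum(d ** 2) / (2 * (len(y) - 1))

def gasser_variance(y):
    """Second-order pseudo-residuals e_i = (y_{i-1} + y_{i+1}) / 2 - y_i.
    Under a smooth mean, E[e_i^2] ~ 1.5 * sigma^2, hence the 2/3 rescaling."""
    e = 0.5 * (y[:-2] + y[2:]) - y[1:-1]
    return np.mean(e ** 2) * 2.0 / 3.0
```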

14.
In this paper, we deal with the semi-parametric estimation of the extreme value index, an important parameter in extreme value analysis. It is well known that many classic estimators, such as the Hill estimator, reveal a strong bias. This problem motivated the study of two classes of kernel estimators. Those classes generalize the classical Hill estimator and have a tuning parameter that enables us to modify the asymptotic mean squared error and eventually to improve their efficiency. Since the improvement in efficiency is not very expressive, we also study new reduced-bias estimators based on the two classes of kernel statistics. Under suitable conditions, we prove their asymptotic normality. Moreover, an asymptotic comparison, at optimal levels, shows that the new classes of reduced-bias estimators are more efficient than other reduced-bias estimators from the literature. An illustration of the finite sample behaviour of the kernel reduced-bias estimators is also provided through the analysis of a data set in the field of insurance.
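The kernel classes discussed above generalize the classical Hill estimator, of which the following is a minimal sketch; the choice of k, the number of upper order statistics, is an illustrative tuning decision and the data are assumed positive.

```python
import numpy as np

def hill_estimator(x, k):
    """gamma_hat = (1/k) * sum_{i=1}^{k} [log X_{(n-i+1)} - log X_{(n-k)}]."""
    xs = np.sort(x)               # ascending order statistics; requires x > 0, k < n
    top = xs[-k:]                 # the k largest observations
    threshold = xs[-(k + 1)]      # the (k+1)-th largest, X_{(n-k)}
    return np.mean(np.log(top) - np.log(threshold))
```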

15.
This paper studies asymptotic normality in semiparametric regression models with censored data. Using spline smoothing and the synthetic data approach, spline estimators of the parameter β and of the nonparametric function h(t) are obtained, and asymptotic normality of the parametric estimator is established, extending the corresponding results for the complete-data case [4].

16.
The ordinary least squares estimation is based on minimization of the squared distance of the response variable to its conditional mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared response variable to its second conditional moment. It is shown that this “second-order” least squares estimator is asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and both estimators have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance reduction of the new estimator can be as high as 50% for sample sizes lower than 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived, which is only available in the literature for very special cases, e.g. that random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.
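A minimal sketch of the second-order least squares idea for a linear model with E[y | x] = xβ and Var(y | x) = σ²: minimize the sum of squared distances of y to its first conditional moment and of y² to its second conditional moment. Identity weights are an illustrative simplification; the estimator studied in the abstract allows a general weighting.

```python
import numpy as np
from scipy.optimize import minimize

def sls_fit(X, y):
    p = X.shape[1]

    def objective(theta):
        beta, log_sigma2 = theta[:p], theta[p]
        m = X @ beta                              # first conditional moment
        s2 = np.exp(log_sigma2)                   # keep variance positive via log-parametrization
        r1 = y - m
        r2 = y ** 2 - (m ** 2 + s2)               # distance to second conditional moment
        return np.sum(r1 ** 2 + r2 ** 2)

    # Start from OLS and the OLS residual variance
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)
    s0 = np.var(y - X @ b0)
    res = minimize(objective, np.concatenate([b0, [np.log(s0)]]), method="BFGS")
    return res.x[:p], np.exp(res.x[p])            # (beta_hat, sigma2_hat)
```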

17.
It is well known that specifying a covariance matrix is difficult in the quantile regression with longitudinal data. This paper develops a two step estimation procedure to improve estimation efficiency based on the modified Cholesky decomposition. Specifically, in the first step, we obtain the initial estimators of regression coefficients by ignoring the possible correlations between repeated measures. Then, we apply the modified Cholesky decomposition to construct the covariance models and obtain the estimator of within-subject covariance matrix. In the second step, we construct unbiased estimating functions to obtain more efficient estimators of regression coefficients. However, the proposed estimating functions are discrete and non-convex. We utilize the induced smoothing method to achieve the fast and accurate estimates of parameters and their asymptotic covariance. Under some regularity conditions, we establish the asymptotically normal distributions for the resulting estimators. Simulation studies and the longitudinal progesterone data analysis show that the proposed approach yields highly efficient estimators.
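The modified Cholesky decomposition underlying the covariance models above factors a within-subject covariance matrix Sigma as T Sigma T' = D, with T unit lower triangular (rows holding negated autoregressive coefficients) and D diagonal (innovation variances). Below is a minimal sketch of that decomposition alone, not the paper's full two-step quantile-regression procedure.

```python
import numpy as np

def modified_cholesky(Sigma):
    m = Sigma.shape[0]
    T = np.eye(m)
    d = np.empty(m)
    d[0] = Sigma[0, 0]
    for j in range(1, m):
        # Coefficients of regressing measurement j on its predecessors
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])
        T[j, :j] = -phi
        d[j] = Sigma[j, j] - Sigma[j, :j] @ phi   # innovation variance
    return T, np.diag(d)

# Sanity check on a small AR(1)-type covariance:
# Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
# T, D = modified_cholesky(Sigma); assert np.allclose(T @ Sigma @ T.T, D)
```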

18.
A great deal of effort has been devoted to the inference of additive models in the last decade. Among existing procedures, kernel-type estimators are too costly to implement for high dimensions or large sample sizes, while spline-type estimators provide no asymptotic distribution or uniform convergence. We propose a one-step backfitting estimator of the component function in an additive regression model, using spline estimators in the first stage followed by kernel/local linear estimators. Under weak conditions, the proposed estimator's pointwise distribution is asymptotically equivalent to that of a univariate kernel/local linear estimator, hence the dimension is effectively reduced to one at any point. This dimension reduction holds uniformly over an interval under the assumption of normal errors. Monte Carlo evidence supports the asymptotic results for dimensions ranging from low to very high, and sample sizes ranging from moderate to large. The proposed confidence band is applied to the Boston housing data for linearity diagnosis. Supported in part by NSF awards DMS 0405330, 0706518, BCS 0308420 and SES 0127722.
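A minimal sketch of the spline-then-kernel idea for a bivariate additive model y = m1(x1) + m2(x2) + eps: a crude linear-spline pilot fit of both components, followed by a local-linear refit of m1 on the partial residuals y - m2_hat(x2). The knots, bandwidth, and Gaussian kernel are illustrative choices, not the paper's.

```python
import numpy as np

def linear_spline_basis(x, knots):
    return np.column_stack([x] + [np.maximum(x - k, 0.0) for k in knots])

def pilot_m2(x1, x2, y, knots1, knots2):
    """Pilot additive fit by least squares; returns m2_hat at the data points."""
    B1 = linear_spline_basis(x1, knots1)
    B2 = linear_spline_basis(x2, knots2)
    Z = np.hstack([np.ones((len(y), 1)), B1, B2])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return B2 @ coef[1 + B1.shape[1]:]

def local_linear(x, r, x0, h):
    """Local-linear estimate of E[r | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ r)
    return beta[0]

def sbk_m1(x1, x2, y, grid, knots1, knots2, h):
    r = y - pilot_m2(x1, x2, y, knots1, knots2)   # partial residuals for m1
    return np.array([local_linear(x1, r, g, h) for g in grid])
```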

19.
Nonparametric regression estimators based on locally weighted least squares fitting have been studied by Fan, and by Ruppert and Wand. The latter paper also studies, in the univariate case, nonparametric derivative estimators given by locally weighted polynomial fitting. Compared with traditional kernel estimators, these estimators are often of simpler form and possess some better properties. In this paper, we develop current work on locally weighted regression and generalize locally weighted polynomial fitting to the estimation of partial derivatives in a multivariate regression context. Specifically, for both the regression and partial derivative estimators we prove joint asymptotic normality and derive explicit asymptotic expansions for their conditional bias and conditional covariance matrix (given observations of the predictor variables) in each of the two important cases of local linear fit and local quadratic fit.
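A minimal sketch of multivariate local-linear fitting: a weighted least squares fit of y on (1, x - x0) around the target point x0, whose intercept estimates m(x0) and whose slope vector estimates the partial derivatives (gradient) of m at x0. The product Gaussian kernel and scalar bandwidth are illustrative choices.

```python
import numpy as np

def local_linear_gradient(X, y, x0, h):
    U = X - x0                                        # predictors centered at x0
    w = np.exp(-0.5 * np.sum((U / h) ** 2, axis=1))   # product Gaussian kernel weights
    Z = np.hstack([np.ones((len(y), 1)), U])
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ y)
    return beta[0], beta[1:]                          # (m_hat(x0), gradient_hat(x0))
```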

20.
It is well known that the spline smoothing estimator relates to the Bayesian estimate under a partially informative normal prior. In this paper, we derive conditions for the propriety of the posterior in the nonparametric mixed effects model under this class of partially informative normal priors for the fixed effects, with inverse gamma priors on the variance components and hierarchical priors for the covariance matrix of the random effects, and then explore the Gibbs sampling procedure.
