Similar articles
20 similar articles found (search time: 281 ms)
1.
Variable selection in linear median regression   Cited by: 1 (self-citations: 0, others: 1)
吴耀华 《数学学报》1990,33(1):78-86
In many situations, we are interested only in selecting the important variables in a linear median regression model. In this paper, we propose a variable selection method and prove its strong consistency.

2.
Wavelet estimation in semiparametric errors-in-variables functional relationship models   Cited by: 10 (self-citations: 0, others: 10)
This paper studies the semiparametric errors-in-variables functional relationship model. Estimators of the unknown parameters and the unknown function are obtained by wavelet estimation and total least squares. Under general conditions, the strong consistency and uniform strong consistency of the estimators are proved, and the strong convergence rate of the error-variance estimator is given.

3.
Strong consistency of estimators in partially linear models under NA samples   Cited by: 9 (self-citations: 0, others: 9)
Consider the regression model y_i = x_iβ + g(t_i) + σ_i e_i, 1 ≤ i ≤ n, where σ_i² = f(u_i), (x_i, t_i, u_i) is a fixed nonrandom design sequence, f(·) and g(·) are unknown functions, β is the parameter to be estimated, and the errors {e_i} are NA random variables. Under suitable conditions, we establish the strong consistency of the least squares estimator and the weighted least squares estimator of β.

4.
李永明  杨善朝 《数学杂志》2004,24(6):601-606
For NA dependent samples, the recursive kernel estimator of an unknown distribution function F(x) is studied. Under suitable conditions, the rate of r-th mean convergence, pointwise strong consistency, and the rate of uniform strong consistency of the estimator are obtained. As an application, the convergence rate of the estimator of the mean residual life function is discussed.

5.
This paper considers variable selection for linear errors-in-variables (EV) models with longitudinal data. Based on the quadratic inference function method and shrinkage ideas, a new bias-corrected variable selection method is proposed. With a suitable choice of tuning parameter, we prove the consistency and asymptotic normality of the resulting estimators. Finally, simulation studies verify the finite-sample properties of the proposed method.

6.
Semiparametric functional relationship models under fixed design   Cited by: 2 (self-citations: 0, others: 2)
This paper studies the semiparametric errors-in-variables model under fixed design. Estimators of the unknown parameters are obtained via weight functions and weighted least squares, and their strong consistency is proved under certain conditions.

7.
李英华  秦永松 《数学研究》2008,41(4):426-433
Under the missing-at-random (MAR) mechanism for the response variable, we study the weak consistency, strong consistency, and asymptotic normality of the least squares estimators of the regression coefficients based on, respectively, the observed complete data pairs, the "complete sample" obtained by deterministic imputation, and the "complete sample" obtained by fractional linear regression imputation. We also compare, via numerical simulation, the confidence intervals for β constructed from these estimators.

8.
何其祥 《应用数学》2007,20(2):427-432
This paper studies linear models whose covariates are interval data. By constructing the conditional mean of the interval-data variable, estimators of the regression parameters are obtained; when the covariate distribution is known, the unbiasedness and strong consistency of the estimators are proved. The case of an unknown covariate distribution is also discussed. Several simulations are reported, which show that the estimators obtained by the proposed method are simple to compute and quite accurate.

9.
Under ψ-mixing errors, this paper discusses the consistency of the class of weighted kernel estimators of nonparametric regression functions proposed by Priestley, M.B. and Chao, M.T. [1]. Complete convergence and strong consistency are proved under rather weak conditions. These results improve the corresponding existing results for both the independent and the dependent cases.

10.
Consistency of weighted kernel estimators of nonparametric regression functions under ψ-mixing errors   Cited by: 15 (self-citations: 0, others: 15)
Under ψ-mixing errors, this paper discusses the consistency of the class of weighted kernel estimators of nonparametric regression functions proposed by Priestley, M.B. and Chao, M.T. [1]. Complete convergence and strong consistency are proved under rather weak conditions. These results improve the corresponding existing results for both the independent and the dependent cases.

11.
With uncorrelated Gaussian factors extended to mutually independent factors beyond Gaussian, conventional factor analysis is extended to what is recently called independent factor analysis. Typically, it is called binary factor analysis (BFA) when the factors are binary and non-Gaussian factor analysis (NFA) when the factors follow real non-Gaussian distributions. A crucial issue in both BFA and NFA is the determination of the number of factors. In the statistics literature, a number of model selection criteria can be used for this purpose; the Bayesian Ying-Yang (BYY) harmony learning also provides a new principle for it. This paper further investigates BYY harmony learning in comparison with existing typical criteria, including Akaike's information criterion (AIC), the consistent Akaike's information criterion (CAIC), the Bayesian information criterion (BIC), and the cross-validation (CV) criterion, for selecting the number of factors. This comparative study is made via experiments on data sets with different sample sizes, data space dimensions, noise variances, and numbers of hidden factors. Experiments show that for both BFA and NFA, in most cases BIC outperforms AIC, CAIC, and CV, while the BYY criterion is comparable with or better than BIC. Selection by these criteria must be implemented at a second stage, based on a set of candidate models obtained at a first stage of parameter learning. BYY harmony learning, in contrast, provides not only a new class of criteria implemented in a similar way but also a new family of algorithms that perform parameter learning at the first stage with automated model selection. BYY harmony learning is therefore preferable, since computing costs can be saved significantly.

12.
In linear regression modeling, selection of regressor variables is a problem of wide interest, with a large literature and strong theoretical and practical significance. Consistency of the selected subset is an important issue: a selection method is desirable if the subset it selects is consistent as the sample size tends to infinity and its prediction mean squared error is small. The BIC criterion can pick out a consistent subset of variables, but its computational cost is prohibitive when the number of variables is large; the adaptive lasso is computationally efficient and can also find a consistent subset. This paper proposes an even simpler variable selection method that requires only two ordinary linear regressions: first, a regression on the full set of variables yields coefficient estimates for all of them; these estimates are then used to screen a subset of variables; finally, one more ordinary linear regression on the selected subset gives the regression result. For a regression model whose true set of nonzero coefficient indices is fixed, letting the selected subset and the estimated coefficients (with zeros for unselected variables) be those produced by our method, we prove that, under suitable conditions, the selected subset is consistent and the estimator of the nonzero coefficients is asymptotically normal, with asymptotic covariance determined by the error variance and the limit of the design matrix. Simulation results show that the method has good small- and moderate-sample properties.
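A minimal sketch of the two-pass procedure described above. The hard-threshold screening rule on the full-model coefficients, the function name, and the default cutoff are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def two_pass_ols_select(X, y, threshold=0.1):
    """Two-pass OLS variable selection sketch.

    The screening rule (hard threshold on the full-model coefficient
    estimates) and its cutoff are illustrative assumptions; the
    paper's actual rule may differ.
    """
    # Pass 1: ordinary least squares on all candidate variables.
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Screen: keep variables whose full-model coefficient is large.
    selected = np.flatnonzero(np.abs(beta_full) > threshold)
    # Pass 2: refit OLS on the selected subset only.
    beta = np.zeros(X.shape[1])
    if selected.size:
        beta_sub, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        beta[selected] = beta_sub
    return selected, beta
```

With a strong signal and moderate noise, the screening step keeps exactly the truly active variables and the refit removes the shrinkage-free full-model noise from the excluded coordinates.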

13.
The data driven Neyman statistic consists of two elements: a score statistic in a finite dimensional submodel and a selection rule to determine the best fitted submodel. For instance, Schwarz BIC and Akaike AIC rules are often applied in such constructions. For moderate sample sizes AIC is sensitive in detecting complex models, while BIC works well for relatively simple structures. When the sample size is moderate, the choice of selection rule for determining a best fitted model from a number of models has a substantial influence on the power of the related data driven Neyman test. This paper proposes a new solution, in which the type of penalty (AIC or BIC) is chosen on the basis of the data. The resulting refined data driven test combines the advantages of these two selection rules.

14.
Methods that minimize the AIC or BIC criterion function for selection of regression variables are considered. The main calculations of some of these methods can be completed economically and recursively. The methods are shown to be strongly consistent, or over-consistent, for the true model. (Institute of Applied Mathematics, Academia Sinica)
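For reference, the AIC/BIC objective minimized in such procedures can be sketched as an exhaustive search over variable subsets under Gaussian errors. The brute-force search and the `best_subset`/`gaussian_ic` names are illustrative assumptions; the paper's economical recursive algorithms are not reproduced here:

```python
from itertools import combinations
import numpy as np

def gaussian_ic(X, y, cols, penalty):
    """Gaussian information criterion n*log(RSS/n) + penalty*|subset|."""
    n = len(y)
    if cols:
        b, *_ = np.linalg.lstsq(X[:, list(cols)], y, rcond=None)
        resid = y - X[:, list(cols)] @ b
    else:
        resid = y - y.mean()  # intercept-only model
    rss = float(resid @ resid)
    return n * np.log(rss / n) + penalty * len(cols)

def best_subset(X, y, criterion="bic"):
    """Exhaustive search minimizing AIC (penalty 2) or BIC (penalty log n)."""
    n, p = X.shape
    penalty = np.log(n) if criterion == "bic" else 2.0
    subsets = (c for k in range(p + 1) for c in combinations(range(p), k))
    return min(subsets, key=lambda c: gaussian_ic(X, y, c, penalty))
```

The exhaustive search costs 2^p fits, which is exactly why the recursive and stepwise schemes discussed in these entries matter when p is large.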

15.
The generalized information criterion (GIC) proposed by Rao and Wu [A strongly consistent procedure for model selection in a regression problem, Biometrika 76 (1989) 369-374] is a generalization of Akaike's information criterion (AIC) and the Bayesian information criterion (BIC). In this paper, we extend the GIC to select linear mixed-effects models that are widely applied in analyzing longitudinal data. The procedure for selecting fixed effects and random effects based on the extended GIC is provided. The asymptotic behavior of the extended GIC method for selecting fixed effects is studied. We prove that, under mild conditions, the selection procedure is asymptotically loss efficient regardless of the existence of a true model and consistent if a true model exists. A simulation study is carried out to empirically evaluate the performance of the extended GIC procedure. The results from the simulation show that if the signal-to-noise ratio is moderate or high, the percentages of choosing the correct fixed effects by the GIC procedure are close to one for finite samples, while the procedure performs relatively poorly when it is used to select random effects.

16.
The efficacy of family-based approaches to mixture model-based clustering and classification depends on the selection of parsimonious models. Current wisdom suggests the Bayesian information criterion (BIC) for mixture model selection. However, the BIC has well-known limitations, including a tendency to overestimate the number of components as well as a proclivity for underestimating, often drastically, the number of components in higher dimensions. While the former problem might be soluble by merging components, the latter is impossible to mitigate in clustering and classification applications. In this paper, a LASSO-penalized BIC (LPBIC) is introduced to overcome this problem. This approach is illustrated based on applications of extensions of mixtures of factor analyzers, where the LPBIC is used to select both the number of components and the number of latent factors. The LPBIC is shown to match or outperform the BIC in several situations.

17.
We employ a statistical criterion (out-of-sample hit rate) and a financial market measure (portfolio performance) to compare the forecasting accuracy of three model selection approaches: Bayesian information criterion (BIC), model averaging, and model mixing. While the more recent approaches of model averaging and model mixing surpass the Bayesian information criterion in their out-of-sample hit rates, the predicted portfolios from these new approaches do not significantly outperform the portfolio obtained via the BIC subset selection method.

18.
This paper discusses the topic of model selection for finite-dimensional normal regression models. We compare model selection criteria according to prediction errors based upon prediction with refitting, and prediction without refitting. We provide a new lower bound for prediction without refitting, while a lower bound for prediction with refitting was given by Rissanen. Moreover, we specify a set of sufficient conditions for a model selection criterion to achieve these bounds. Then the achievability of the two bounds by the following selection rules are addressed: Rissanen's accumulated prediction error criterion (APE), his stochastic complexity criterion, AIC, BIC and the FPE criteria. In particular, we provide upper bounds on overfitting and underfitting probabilities needed for the achievability. Finally, we offer a brief discussion on the issue of finite-dimensional vs. infinite-dimensional model assumptions. Support from the National Science Foundation, grant DMS 8802378 and support from ARO, grant DAAL03-91-G-007 to B. Yu during the revision are gratefully acknowledged.

19.
The minimax concave penalty (MCP) has been demonstrated theoretically and practically to be effective in nonconvex penalization for variable selection and parameter estimation. In this paper, we develop an efficient alternating direction method of multipliers (ADMM) with continuation algorithm for solving the MCP-penalized least squares problem in high dimensions. Under some mild conditions, we study the convergence properties and the Karush-Kuhn-Tucker (KKT) optimality conditions of the proposed method. A high-dimensional BIC is developed to select the optimal tuning parameters. Simulations and a real data example are presented to illustrate the efficiency and accuracy of the proposed method.
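The MCP itself has a simple closed form: p(t; λ, γ) = λ|t| - t²/(2γ) for |t| ≤ γλ, and the constant γλ²/2 otherwise. A sketch, with default λ and γ chosen arbitrarily for illustration:

```python
import numpy as np

def mcp_penalty(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty (MCP), evaluated elementwise.

    Quadratic-minus-linear near zero, then flat: it penalizes like
    the lasso for small |t| but applies no shrinkage beyond gamma*lam.
    """
    t = np.abs(np.asarray(t, dtype=float))
    inner = lam * t - t**2 / (2 * gamma)   # region |t| <= gamma*lam
    flat = gamma * lam**2 / 2              # constant beyond gamma*lam
    return np.where(t <= gamma * lam, inner, flat)
```

The flat region is what removes the estimation bias that the lasso imposes on large coefficients, at the price of a nonconvex objective, hence the need for algorithms such as the ADMM-with-continuation scheme described above.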

20.
Fast stepwise procedures for selection of variables using the AIC and BIC criteria are proposed in this paper; we use the short name "FSP" for these new procedures. FSP resemble the well-known stepwise regression procedures in their computing steps, but have two advantages: they are guaranteed to converge, at a faster rate, within finitely many computing steps, and they can be used with a large number of candidate variables. We also show some asymptotic properties of FSP and some simulation results.
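As a point of comparison, the classical greedy forward-stepwise search minimizing AIC or BIC can be sketched as follows. This is the standard procedure FSP is compared against, not the paper's faster FSP variant, and Gaussian errors are assumed:

```python
import numpy as np

def rss(X, y, cols):
    """Residual sum of squares of OLS on the given columns."""
    if not cols:
        return float(np.sum((y - y.mean()) ** 2))
    b, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    r = y - X[:, cols] @ b
    return float(r @ r)

def forward_stepwise(X, y, criterion="bic"):
    """Greedy forward selection minimizing AIC or BIC (Gaussian errors)."""
    n, p = X.shape
    pen = np.log(n) if criterion == "bic" else 2.0

    def score(cols):
        return n * np.log(rss(X, y, cols) / n) + pen * len(cols)

    selected, best = [], score([])
    improved = True
    while improved:
        improved = False
        # Try adding each remaining variable; keep the best improvement.
        for j in set(range(p)) - set(selected):
            s = score(selected + [j])
            if s < best:
                best, best_j, improved = s, j, True
        if improved:
            selected.append(best_j)
    return sorted(selected)
```

Each pass refits p candidate models from scratch; the recursive updating exploited by FSP-style procedures is precisely what removes this redundancy.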


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号