1.
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, for example, spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors, which may be observed with additional error, and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework are based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function of the R package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well, and also scales to larger datasets. Applications with spatially and longitudinally observed functional data demonstrate the modeling flexibility and the interpretability of results of our approach.
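A minimal sketch of fitting such a model with the pffr() function from the refund package. The simulated data and the single scalar covariate below are illustrative assumptions, not the data analysed in the article.

library(refund)

set.seed(1)
n <- 60; nt <- 40
t_grid <- seq(0, 1, length.out = nt)          # index of the functional response
z <- rnorm(n)                                 # scalar covariate
# functional response: effect of z varying smoothly over t, plus white noise
Y <- outer(z, sin(2 * pi * t_grid)) + matrix(rnorm(n * nt, sd = 0.3), n, nt)

dat <- data.frame(z = z)
dat$Y <- Y                                    # matrix-valued response column

# linear effect of z with coefficient beta(t) varying over the response index
fit <- pffr(Y ~ z, yind = t_grid, data = dat)
summary(fit)
plot(fit, pages = 1)                          # estimated functional intercept and beta(t)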
2.
This article proposes a practical modeling approach that can accommodate a rich variety of predictors, united in a generalized linear model (GLM) setting. In addition to the usual ANOVA-type or covariate linear (L) predictors, we consider modeling any combination of smooth additive (G) components, varying coefficient (V) components, and (discrete representations of) signal (S) components. We assume that G is, and the coefficients of V and S are, inherently smooth, and we project each of these onto B-spline bases using a modest number of equally spaced knots. Enough knots are used to ensure more flexibility than needed; further smoothness is achieved through a difference penalty on adjacent B-spline coefficients (P-splines). This linear re-expression allows all of the parameters associated with these components to be estimated simultaneously in one large GLM through penalized likelihood. Thus, we have the advantage of avoiding both the backfitting algorithm and complex knot selection schemes. We regulate the flexibility of each component through a separate penalty parameter that is chosen optimally by cross-validation or an information criterion.
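The P-spline device described above can be illustrated with a short, self-contained R sketch: a deliberately rich B-spline basis on equally spaced knots, with a second-order difference penalty on adjacent coefficients. The simulated data and the fixed penalty value are illustrative assumptions; in practice the penalty would be tuned by cross-validation or an information criterion, as described.

library(splines)

set.seed(2)
n <- 200
x <- sort(runif(n))
y <- sin(3 * pi * x) + rnorm(n, sd = 0.2)

# generous B-spline basis on equally spaced interior knots (more flexibility than needed)
knots <- seq(min(x), max(x), length.out = 22)[-c(1, 22)]
B <- bs(x, knots = knots, degree = 3, intercept = TRUE)

D <- diff(diag(ncol(B)), differences = 2)     # second-order difference matrix
lambda <- 1                                   # penalty parameter (illustrative; tune by CV or AIC)

# penalized least squares: (B'B + lambda D'D) a = B'y
a <- solve(crossprod(B) + lambda * crossprod(D), crossprod(B, y))
plot(x, y, col = "grey"); lines(x, B %*% a, lwd = 2)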
3.
Variable and model selection are of major concern in many statistical applications, especially in high-dimensional regression models. Boosting is a convenient statistical method that combines model fitting with intrinsic model selection. We investigate the impact of base-learner specification on the performance of boosting as a model selection procedure. We show that variable selection may be biased if the covariates are of different nature. Important examples are models combining continuous and categorical covariates, especially if the number of categories is large. In this case, least squares base-learners offer increased flexibility for the categorical covariate and lead to its preferential selection even if it is noninformative. Similar difficulties arise when comparing linear and nonlinear base-learners for a continuous covariate. The additional flexibility in the nonlinear base-learner again yields a preference for the more complex modeling alternative. We investigate these problems from a theoretical perspective and suggest a framework for bias correction based on a general class of penalized least squares base-learners. Making all base-learners comparable in terms of their degrees of freedom strongly reduces the selection bias observed in naive boosting specifications. The importance of unbiased model selection is demonstrated in simulations. Supplemental materials, including an application to forest health models, additional simulation results, additional theorems, and proofs for the theorems, are available online.
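A hedged sketch of the degrees-of-freedom matching idea, using the mboost package in R. The simulated data, the choice of one degree of freedom per base-learner, and the base-learner set are illustrative assumptions, not the authors' exact specification.

library(mboost)

set.seed(3)
n <- 300
x1 <- rnorm(n)                                          # continuous covariate
x2 <- factor(sample(letters[1:8], n, replace = TRUE))   # categorical covariate, 8 levels
y <- 0.5 * x1 + rnorm(n)                                # x2 is noninformative

# each base-learner is constrained (via ridge/difference penalties) to df = 1,
# so linear, smooth, and categorical effects compete on an equal footing
fit <- gamboost(y ~ bols(x1, intercept = FALSE, df = 1) +
                    bbs(x1, df = 1, center = TRUE) +
                    bols(x2, df = 1),
                control = boost_control(mstop = 200))

table(selected(fit))                                    # base-learners chosen across iterations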
4.
Baseline correction and artifact removal are important pre-processing steps in analytical chemistry. We propose a correction algorithm using a mixture model in combination with penalized regression. The model is an extension of a method recently introduced for baseline estimation in the case of one-dimensional data. The data are modeled as a smooth surface using tensor product P-splines. The weights of the P-spline regression model are computed from a mixture model in which a data point is allocated either to the noise around the baseline or to the artifact component. The method is broadly applicable for anisotropic smoothing of two-way data such as two-dimensional gel electrophoresis and two-dimensional chromatography data. We focus here on the application of the approach in femtosecond time-resolved spectroscopy, to eliminate strong artifact signals from the solvent.
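The anisotropic tensor product P-spline surface fit that underlies this correction can be sketched in R as follows. The mixture-model weighting step is omitted, and the simulated two-way data, basis sizes, and penalty values are illustrative assumptions only.

library(splines)

set.seed(4)
m <- 40; n <- 30
u <- seq(0, 1, length.out = m)
v <- seq(0, 1, length.out = n)
# two-way data: a smooth bump plus noise, standing in for a baseline surface
Z <- outer(u, v, function(uu, vv) exp(-((uu - 0.5)^2 + (vv - 0.4)^2) / 0.05)) +
     matrix(rnorm(m * n, sd = 0.05), m, n)

Bu <- bs(u, df = 12, intercept = TRUE)             # marginal B-spline bases
Bv <- bs(v, df = 10, intercept = TRUE)
Du <- diff(diag(ncol(Bu)), differences = 2)        # difference penalty, row direction
Dv <- diff(diag(ncol(Bv)), differences = 2)        # difference penalty, column direction
lu <- 1; lv <- 10                                  # separate penalties => anisotropic smoothing

B <- kronecker(Bv, Bu)                             # tensor product basis for vec(Z)
P <- lu * kronecker(diag(ncol(Bv)), crossprod(Du)) +
     lv * kronecker(crossprod(Dv), diag(ncol(Bu)))
alpha <- solve(crossprod(B) + P, crossprod(B, as.vector(Z)))
Zhat <- matrix(B %*% alpha, m, n)                  # smoothed surface (mixture weights would enter
                                                   # here as a weighted least squares step)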