Similar Documents
20 similar documents found.
1.
We introduce regularized wavelet-based methods for nonlinear regression modeling when design points are not equally spaced. A crucial issue in the model-building process is the choice of tuning parameters that control the smoothness of the fitted curve. We derive model selection criteria from both information-theoretic and Bayesian approaches. Monte Carlo simulations are conducted to examine the performance of the proposed wavelet-based modeling technique.
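The role of the smoothness-controlling tuning parameter can be illustrated with a minimal sketch (not the authors' method, which handles unequally spaced designs and selects the parameter by information-theoretic/Bayesian criteria): a Haar wavelet decomposition of an equally spaced signal followed by soft-thresholding of the detail coefficients, where the threshold `lam` plays the role of the tuning parameter.

```python
import math

def haar_forward(x):
    """Full Haar decomposition of a signal whose length is a power of two."""
    coeffs, a = [], list(x)
    while len(a) > 1:
        avg = [(a[2*i] + a[2*i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        det = [(a[2*i] - a[2*i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        coeffs.append(det)  # finest level first
        a = avg
    return a[0], coeffs

def haar_inverse(approx, coeffs):
    a = [approx]
    for det in reversed(coeffs):  # rebuild from coarsest to finest
        nxt = []
        for s, d in zip(a, det):
            nxt.extend([(s + d) / math.sqrt(2), (s - d) / math.sqrt(2)])
        a = nxt
    return a

def soft(v, t):
    """Soft-thresholding: shrink v toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def wavelet_denoise(y, lam):
    """Shrink every detail coefficient by lam; lam tunes the smoothness."""
    approx, coeffs = haar_forward(y)
    return haar_inverse(approx, [[soft(d, lam) for d in lvl] for lvl in coeffs])
```

Larger `lam` zeroes more detail coefficients and yields a smoother fitted curve; `lam = 0` reproduces the data exactly.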

2.
We introduce a nonlinear regression modeling strategy using a regularized local likelihood method. The local likelihood method is effective for analyzing data with complex structure, but the stability of the local likelihood estimator is not necessarily guaranteed when the structure of the system is very complex. To overcome this difficulty, we propose a regularized local likelihood method with a polynomial function, which unites local likelihood and regularization. A crucial issue in constructing nonlinear regression models is the choice of the smoothing parameter, the degree of the polynomial, and the regularization parameter. To evaluate models estimated by the regularized local likelihood method, we derive a model selection criterion from an information-theoretic point of view. Real data analysis and Monte Carlo experiments are conducted to examine the performance of our modeling strategy.
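For a Gaussian response, the local likelihood reduces to locally weighted least squares, so the regularized local likelihood idea can be sketched as a kernel-weighted linear fit with a ridge penalty on the slope. The Gaussian kernel, bandwidth `h`, and penalty `lam` below are illustrative stand-ins for the smoothing parameter, polynomial degree, and regularization parameter discussed in the abstract.

```python
import math

def local_linear_fit(x, y, x0, h=0.5, lam=1e-3):
    """Ridge-regularized local linear estimate of the regression function
    at x0: Gaussian-kernel weighted least squares for (b0, b1) in
    y ~ b0 + b1*(x - x0), with an extra penalty lam on the slope b1."""
    S0 = S1 = S2 = T0 = T1 = 0.0
    for xi, yi in zip(x, y):
        u = xi - x0
        w = math.exp(-0.5 * (u / h) ** 2)  # kernel weight, local to x0
        S0 += w; S1 += w * u; S2 += w * u * u
        T0 += w * yi; T1 += w * u * yi
    # penalized 2x2 normal equations; the estimate of f(x0) is b0
    det = S0 * (S2 + lam) - S1 * S1
    b0 = (T0 * (S2 + lam) - T1 * S1) / det
    return b0
```

The penalty `lam` stabilizes the solve when the local design is nearly degenerate, which is the stability issue the abstract addresses.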

3.
Based on the simplicity and computability of polyline functions, we consider the regularized regression learning algorithm associated with the least-squares loss and a set of polyline functions. The target is the error analysis for the regression problem. The approach presented in the paper yields satisfactory learning rates. The rates depend on the approximation property of the hypothesis set and on its capacity measured by covering numbers. Under certain conditions, the rates achieve $m^{-4/5}\log m$. Copyright © 2012 John Wiley & Sons, Ltd.

4.
We show that if a bounded analytic semigroup on satisfies a Gaussian estimate of order and is the generator of its consistent semigroup on , then generates a -regularized group on where . We obtain the estimate of () and the -independence of , and give applications to Schrödinger operators and elliptic operators of higher order.


5.
A flexible nonparametric method is proposed for classifying high-dimensional data with a complex structure. The proposed method can be regarded as an extended version of linear logistic discriminant procedures, in which the linear predictor is replaced by a radial-basis-expansion predictor. Radial basis functions with a hyperparameter are used to take the information on covariates and class labels into account; this was nearly impossible within the previously proposed hybrid learning framework. Penalized maximum likelihood estimation is employed to obtain stable parameter estimates. A crucial issue in the model-construction process is the choice of a suitable model from the candidates; this issue is examined from information-theoretic and Bayesian viewpoints, and we employ the model evaluation criteria of Ando et al. (Japanese Journal of Applied Statistics, 31, 123–139, 2002). The proposed method applies not only to high-dimensional data but also to the variable selection problem. Real data analysis and Monte Carlo experiments show that the proposed method performs well in classifying future observations in practical situations. The simulation results also show that the use of the hyperparameter in the basis functions improves the prediction performance.

6.
The penalty term in the traditional penalized spline regression model ignores the spatial heterogeneity of the data, so the fit lacks adaptivity for complex data. By analyzing the geometric meaning of radial basis functions, this paper constructs a local penalty weight vector from the vertical range of the data points in the neighborhoods on either side of each knot, adds it to the penalty term of the constrained regression model, and thereby builds an adaptive penalized spline regression model based on radial bases. In regions where the observations fluctuate strongly, the new model penalizes the fitted curve lightly; in regions where they fluctuate little, it penalizes heavily, so the fitted curve adaptively reflects the local features of the observed data. Simulation and application results show that the new model fits significantly better than the traditional penalized spline regression model.
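A minimal sketch of the adaptive idea described above: a radial-basis penalized regression in which each basis coefficient's ridge weight shrinks where the local vertical range of the observations is large (light penalty in wiggly regions, heavy penalty in flat ones). The Gaussian basis, its width, the baseline penalty `lam`, and the exact weight formula are illustrative choices, not the paper's specification.

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def adaptive_pspline(x, y, knots, width=0.3, lam=1.0):
    """Radial-basis penalized spline whose ridge penalty on each basis
    coefficient is scaled by the inverse local range of y near its knot:
    wiggly regions get a lighter penalty, flat regions a heavier one."""
    def basis(xi):
        return [1.0] + [math.exp(-((xi - k) / width) ** 2) for k in knots]
    B = [basis(xi) for xi in x]
    pen = [0.0]  # intercept unpenalized
    for k in knots:
        near = [yi for xi, yi in zip(x, y) if abs(xi - k) <= width]
        rng = (max(near) - min(near)) if near else 0.0  # local vertical range
        pen.append(lam / (1.0 + rng))
    p = len(knots) + 1
    A = [[sum(B[i][r] * B[i][c] for i in range(len(x)))
          + (pen[r] if r == c else 0.0) for c in range(p)] for r in range(p)]
    rhs = [sum(B[i][r] * yi for i, yi in enumerate(y)) for r in range(p)]
    beta = solve(A, rhs)
    return lambda xq: sum(b * f for b, f in zip(beta, basis(xq)))
```
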

7.
High-dimensional feature selection has become increasingly crucial for seeking parsimonious models in estimation. For selection consistency, we derive a necessary and sufficient condition formulated in terms of a notion of degree of separation. The minimal degree of separation is necessary for any method to be selection consistent. At a level slightly higher than the minimal degree of separation, selection consistency is achieved by a constrained $L_0$-method and its computational surrogate, the constrained truncated $L_1$-method. This permits up to exponentially many features in the sample size; in other words, these methods are optimal in feature selection against any selection method. In contrast, their regularization counterparts, the $L_0$-regularization and truncated $L_1$-regularization methods, achieve the same under slightly stronger assumptions. More importantly, sharper parameter estimation/prediction is realized through such selection, leading to minimax parameter estimation, which is otherwise impossible in the absence of a good selection method for high-dimensional analysis.

8.
Complex-variable methods are used to obtain expansions of the error in Gaussian quadrature formulae over the interval $[-1, 1]$. Much of the work is based on an approach due to Stenger, and both circular and elliptical contours are used. Stenger's theorem on the monotonicity of convergence of Gaussian quadrature formulae is generalized, and a number of error bounds are obtained.
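The error behaviour discussed above can be observed numerically. The sketch below builds the n-point Gauss-Legendre rule on $[-1, 1]$ by Newton iteration on the Legendre polynomials and measures the quadrature error for a smooth integrand; the rapid decay with n is what contour-based error bounds quantify. (Illustrative only; the function names are my own.)

```python
import math

def legendre(n, x):
    """Return (P_n(x), P_n'(x)) via the three-term recurrence."""
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = n * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def gauss_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1]."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # classical initial guess
        for _ in range(100):
            p, dp = legendre(n, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        _, dp = legendre(n, x)  # derivative at the converged node
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

def quad_error(f, exact, n):
    """Absolute error of the n-point rule applied to f on [-1, 1]."""
    xs, ws = gauss_legendre(n)
    return abs(sum(w * f(x) for x, w in zip(xs, ws)) - exact)
```

For an entire integrand such as exp, the error decays faster than any power of n, consistent with bounds from large elliptical contours.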

9.
10.
This paper discusses the learning rates of a broad class of regularized regression algorithms in reproducing kernel Hilbert spaces. In analyzing the sample error of the algorithms, we employ a weighted empirical process that keeps the variance and the penalty functional controlled by the same threshold, thereby avoiding a tedious iteration procedure. We obtain learning rates faster than those in the previous literature.

11.
Heavy-tailed noise and strongly correlated predictors often accompany the multivariate linear regression model. To tackle these problems, this paper focuses on the matrix elastic-net regularized multivariate Huber regression model. The new model possesses the grouping-effect property and is robust to heavy-tailed noise; thanks to the Huber loss, it also reduces the negative effect of outliers. Furthermore, an accelerated proximal gradient algorithm is designed to solve the proposed model. Numerical studies, including a real data analysis, demonstrate the efficiency of our method.
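A scalar-response sketch of elastic-net regularized Huber regression solved by proximal gradient descent (the paper's model is the matrix/multivariate version and uses an accelerated variant). The L1 part is handled by the soft-thresholding proximal step, while the Huber and ridge terms are handled by their gradients; `delta`, `lam1`, `lam2`, and `step` are illustrative.

```python
def huber_grad(r, delta):
    """Derivative of the Huber loss: linear inside [-delta, delta], clipped outside."""
    return r if abs(r) <= delta else delta * (1 if r > 0 else -1)

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * |.|_1."""
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1)

def elastic_net_huber(X, y, lam1=0.1, lam2=0.1, delta=1.0, step=0.1, iters=2000):
    """Minimize (1/n) sum huber(y_i - x_i.beta) + lam1*|beta|_1 + (lam2/2)*|beta|^2
    by proximal gradient descent (plain, non-accelerated sketch)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        resid = [yi - sum(xij * bj for xij, bj in zip(xi, beta))
                 for xi, yi in zip(X, y)]
        # gradient of the smooth part: Huber loss plus ridge term
        grad = [lam2 * beta[j]
                - sum(huber_grad(r, delta) * xi[j] for r, xi in zip(resid, X)) / n
                for j in range(p)]
        # gradient step, then proximal (soft-threshold) step for the L1 term
        beta = [soft(bj - step * gj, step * lam1) for bj, gj in zip(beta, grad)]
    return beta
```
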

12.
Semi-supervised learning is an emerging computational paradigm for machine learning that aims to make better use of large amounts of inexpensive unlabeled data to improve learning performance. While various methods have been proposed based on different intuitions, the crucial issue of generalization performance is still poorly understood. In this paper, we investigate the convergence property of Laplacian regularized least squares regression, a semi-supervised learning algorithm based on manifold regularization. Moreover, an improvement of the error bounds in terms of the number of labeled and unlabeled data is presented, to our knowledge for the first time. The convergence rate depends on the approximation property and the capacity of the reproducing kernel Hilbert space measured by covering numbers. Some new techniques are exploited in the analysis because an extra regularizer is introduced.
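The algorithm under study can be sketched via the standard closed form of manifold-regularized least squares: a kernel expansion over both labeled and unlabeled points, with a graph-Laplacian smoothness penalty built from all points. The Gaussian kernel, the choice of graph weights, and the parameters `lam_a`/`lam_i` below are illustrative, not those analyzed in the paper.

```python
import numpy as np

def laprls(X, y_labeled, n_labeled, lam_a=1e-2, lam_i=1e-2, gamma=1.0):
    """Laplacian regularized least squares sketch: X holds n_labeled labeled
    points followed by unlabeled ones; alpha solves the standard closed form
    (J K + lam_a*l*I + (lam_i*l/n^2) L K) alpha = y."""
    X = np.asarray(X, float)
    n, l = len(X), n_labeled
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)          # kernel matrix over all points
    W = K                            # graph weights (same kernel, for brevity)
    L = np.diag(W.sum(1)) - W        # combinatorial graph Laplacian
    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)   # selects labeled points
    y = np.zeros(n); y[:l] = y_labeled
    M = J @ K + lam_a * l * np.eye(n) + (lam_i * l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(M, y)
    def predict(Xq):
        Xq = np.asarray(Xq, float)
        kq = np.exp(-gamma * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return kq @ alpha
    return predict
```

The unlabeled points enter only through the Laplacian term, which is how the algorithm exploits them to improve the learned function.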

13.
Least-squares regularized learning algorithms for regression are well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes or for regularizers that are the qth power of the function norm (q-penalty) with $0 < q \le 2$. The purpose of this article is to conduct error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with $0 < q \le 2$. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.

14.
Acta Mathematicae Applicatae Sinica, English Series - Recently, variable selection based on penalized regression methods has received a great deal of attention, mostly through frequentist’s...

15.
In this paper, we study the consistency of the regularized least-square regression in a general reproducing kernel Hilbert space. We characterize the compactness of the inclusion map from a reproducing kernel Hilbert space to the space of continuous functions and show that the capacity-based analysis by uniform covering numbers may fail in a very general setting. We prove the consistency and compute the learning rate by means of integral operator techniques. To this end, we study the properties of the integral operator. The analysis reveals that the essence of this approach is the isomorphism of the square root operator.
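The regularized least-squares algorithm analyzed above has the familiar closed form below; a Gaussian kernel is used purely for illustration (the paper's setting is a general reproducing kernel Hilbert space, and the parameters here are arbitrary choices).

```python
import numpy as np

def kernel_ridge(X, y, lam=1e-2, gamma=1.0):
    """Regularized least squares in the RKHS of a Gaussian kernel:
    f(x) = sum_i alpha_i k(x_i, x) with alpha = (K + lam*n*I)^{-1} y."""
    X = np.asarray(X, float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                  # kernel Gram matrix
    n = len(X)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), np.asarray(y, float))
    def predict(Xq):
        Xq = np.asarray(Xq, float)
        kq = np.exp(-gamma * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return kq @ alpha
    return predict
```

As `lam` shrinks to zero the fit approaches kernel interpolation of the data; the learning-rate analyses in these abstracts quantify how `lam` should scale with the sample size.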

16.
This paper addresses the learning algorithm on the unit sphere. The main purpose is to present an error analysis for regression generated by regularized least square algorithms with spherical harmonics kernel. The excess error can be estimated by the sum of sample errors and regularization errors. Our study shows that by introducing a suitable spherical harmonics kernel, the regularization parameter can decrease arbitrarily fast with the sample size.

17.
This paper presents learning rates for least-squares regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space measured by covering numbers. We also establish a direct approximation theorem by Bernstein-Durrmeyer operators in with Borel probability measure.

18.
蔡佳, 王承. 《中国科学:数学》 (Scientia Sinica Mathematica), 2013, 43(6): 613-624
This paper discusses coefficient-based regularization with the least-squares loss for unbounded sampling in a data-dependent hypothesis space. The learning criterion differs essentially from the earlier reproducing kernel Hilbert space setting: beyond continuity and boundedness, the kernel need not be symmetric or positive definite; the regularizer is the l2-norm of the expansion coefficients of a function over the sample; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The goal of this paper is to give concentration estimates of the error via l2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing a suitable Hilbert space together with the technique of l2-empirical covering numbers, we obtain satisfactory learning rates that depend on the capacity of the hypothesis space and on the regularity of the regression function.

19.
The main purpose of this paper is to discuss the asymptotic behaviour of the difference $s_{q,k}(P(n)) - k(q-1)/2$, where $s_{q,k}(n)$ denotes the sum of the first k digits in the q-ary digital expansion of n and P(x) is an integer polynomial. We prove that this difference can be approximated by a Brownian motion and obtain, under special assumptions on P, a Strassen-type version of the law of the iterated logarithm. Furthermore, we extend these results to the joint distribution of $q_1$-ary and $q_2$-ary digital expansions, where $q_1$ and $q_2$ are coprime. This revised version was published online in June 2006 with corrections to the Cover Date.
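The centering term $k(q-1)/2$ can be checked numerically. The sketch below uses one plausible reading of $s_{q,k}$ (the sum of the k least significant q-ary digits, an assumption on my part) and verifies that its average over a full period of $q^k$ integers is exactly $k(q-1)/2$, since each digit is uniform on $\{0, \dots, q-1\}$ over such a period.

```python
def s_qk(n, q, k):
    """Sum of the k least significant digits of n in base q
    (one plausible reading of s_{q,k}; illustrative only)."""
    total = 0
    for _ in range(k):
        total += n % q
        n //= q
    return total

def centered_mean(q, k):
    """Average of s_{q,k}(n) - k(q-1)/2 over a full period n < q**k."""
    m = q ** k
    return sum(s_qk(n, q, k) for n in range(m)) / m - k * (q - 1) / 2
```

The paper studies the much finer fluctuations of this centered quantity along polynomial subsequences, not just its mean.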

20.
A new Gaussian graphical modeling procedure that is robustified against possible outliers is proposed. The likelihood function is weighted according to how far each observation deviates, where the deviation is measured by the observation's likelihood. Test statistics associated with the robustified estimators are developed, including statistics for the goodness of fit of a model. An outlying score, similar to but more robust than the Mahalanobis distance, is also proposed; the new scores make it easier to identify outlying observations. A Monte Carlo simulation and an analysis of a real data set show that the proposed method works better than ordinary Gaussian graphical modeling and some other robustified multivariate estimators.
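A simplified sketch of the downweighting idea: Huber-type weights on the Mahalanobis distance stand in for the paper's likelihood-based weights, and the final distances play the role of an outlying score. The cutoff `c`, the weight function, and the use of squared weights for the covariance are illustrative choices, not the paper's estimator.

```python
import numpy as np

def robust_mean_cov(X, c=3.0, n_iter=25):
    """Iteratively reweighted Gaussian fit: observations whose Mahalanobis
    distance d exceeds c (i.e. low-likelihood points) get weight c/d < 1,
    so outliers barely influence the final mean/covariance estimate."""
    X = np.asarray(X, float)
    p = X.shape[1]
    mu, S = X.mean(0), np.cov(X.T) + 1e-8 * np.eye(p)
    for _ in range(n_iter):
        Xc = X - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
        w = np.where(d <= c, 1.0, c / np.maximum(d, 1e-12))
        mu = (w[:, None] * X).sum(0) / w.sum()
        Xc = X - mu
        # squared weights make gross outliers nearly harmless to the scatter
        S = ((w ** 2)[:, None] * Xc).T @ Xc / (w ** 2).sum() + 1e-8 * np.eye(p)
    Xc = X - mu
    score = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
    return mu, S, score  # score serves as an outlying score
```
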
