Similar Articles
20 similar articles found.
1.
To achieve robustness against outliers or heavy-tailed sampling distributions, we consider an Ivanov regularized empirical risk minimization scheme with a modified Huber loss for nonparametric regression in a reproducing kernel Hilbert space. By tuning the scaling and regularization parameters in accordance with the sample size, we develop nonasymptotic concentration results for this adaptive estimator. In particular, we establish optimal convergence rates for the prediction error when the conditional distribution satisfies a weak moment condition.
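To make the setup concrete, here is a minimal sketch of robust kernel regression with a Huber-type loss, fitted by gradient descent on the expansion coefficients. It uses a Tikhonov penalty rather than the paper's Ivanov constraint, and all parameter choices (gamma, lam, sigma, lr) are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of the Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def huber_grad(r, sigma=1.0):
    # Derivative of Huber's loss: quadratic near zero, linear in the tails,
    # which is what gives robustness to outliers and heavy tails
    return np.where(np.abs(r) <= sigma, r, sigma * np.sign(r))

def robust_kernel_regression(X, y, gamma=1.0, lam=1e-2, sigma=1.0,
                             lr=0.1, iters=500):
    # f(x) = sum_i alpha_i k(x_i, x); gradient descent on the empirical
    # Huber risk plus lam * ||f||_K^2 (Tikhonov form of the regularization)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(y))
    for _ in range(iters):
        residual = K @ alpha - y
        grad = K @ huber_grad(residual, sigma) / len(y) + 2 * lam * K @ alpha
        alpha -= lr * grad
    return alpha

# Example: heavy-tailed noise around a smooth target
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sin(np.pi * X).ravel() + 0.2 * rng.standard_t(df=2, size=100)
alpha = robust_kernel_regression(X, y)
```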

2.
This paper proposes a method for estimating the conditional quantile function using an epsilon-insensitive loss in a reproducing kernel Hilbert space. Choosing a smoothing parameter in a nonparametric framework requires evaluating the complexity of the model; in this regard, we provide a simple formula for computing the effective number of parameters under an epsilon-insensitive loss. We also investigate the effects of the epsilon-insensitive loss.
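As a rough illustration of the loss in question, the sketch below combines the pinball (quantile) loss with an epsilon-insensitive dead zone; the exact form used in the paper may differ. The trace-based degrees-of-freedom proxy is a common heuristic for kernel smoothers, not the paper's formula.

```python
import numpy as np

def eps_insensitive_pinball(r, tau=0.5, eps=0.1):
    # Pinball loss for the tau-th quantile, with residuals inside
    # [-eps, eps] incurring no loss (assumed form, for illustration)
    r = np.where(np.abs(r) <= eps, 0.0, r - np.sign(r) * eps)
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

def effective_parameters(K, lam=1e-2):
    # Common trace heuristic for the complexity of a kernel smoother:
    # df = trace(K (K + n*lam*I)^{-1}); a stand-in, not the paper's formula
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n)))
```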

7.
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering nonparametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in several norm variants for the studied algorithms, under a capacity assumption on the hypothesis space and a general source condition on the target function. As a consequence, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous work, filling a theoretical gap for the non-attainable cases.
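The unifying "spectral algorithm" view can be sketched as follows: each method applies a different filter g to the eigenvalues of the kernel matrix. The scaling conventions below are assumptions of this sketch, not the paper's exact definitions.

```python
import numpy as np

def spectral_fit(K, y, lam=1e-2, method="ridge", t=50):
    # Compute alpha = g_lam(K) y by filtering the eigenvalues of the kernel
    # matrix K; the fitted function is f(x) = sum_i alpha_i k(x_i, x).
    evals, evecs = np.linalg.eigh(K)
    evals = np.clip(evals, 0.0, None)   # guard tiny negative eigenvalues
    if method == "ridge":               # Tikhonov filter g(s) = 1 / (s + lam)
        g = 1.0 / (evals + lam)
    elif method == "pcr":               # spectral cut-off: keep eigenvalues >= lam
        g = np.where(evals >= lam, 1.0 / np.maximum(evals, 1e-12), 0.0)
    else:                               # "gd": t gradient-descent (Landweber) steps
        eta = 1.0 / evals.max()
        g = (1.0 - (1.0 - eta * evals) ** t) / np.maximum(evals, 1e-12)
    return evecs @ (g * (evecs.T @ y))
```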

8.
In this paper, we consider unregularized online learning algorithms in a reproducing kernel Hilbert space (RKHS). First, we derive explicit convergence rates of unregularized online learning algorithms for classification with a general α-activating loss (see Definition 1 below). Our results extend and refine those in [30] for the least square loss and the recent result in [3] for loss functions with a Lipschitz-continuous gradient. Moreover, we establish a very general condition on the step sizes that guarantees convergence of the last iterate of such algorithms. Second, we establish, for the first time, the convergence of the unregularized pairwise learning algorithm with a general loss function, and derive explicit rates under the assumption of polynomially decaying step sizes. Concrete examples illustrate the main results. The main techniques are tools from convex analysis, refined inequalities for Gaussian averages [5], and an induction argument.
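A minimal sketch of the unregularized online scheme, assuming a polynomially decaying step size η_t = t^(−θ) and a generic loss derivative; the least square loss in the example is one member of the α-activating family studied.

```python
import numpy as np

def online_kernel_learning(X, y, kernel, loss_deriv, theta=0.6):
    # Unregularized online learning in an RKHS: after seeing (x_t, y_t),
    #   f_{t+1} = f_t - eta_t * l'(f_t(x_t), y_t) * k(x_t, .)
    # so each iterate is a kernel expansion over the examples seen so far.
    alpha = np.zeros(len(y))
    for t in range(len(y)):
        eta = (t + 1.0) ** (-theta)          # polynomially decaying steps
        f_xt = sum(alpha[i] * kernel(X[i], X[t]) for i in range(t))
        alpha[t] = -eta * loss_deriv(f_xt, y[t])
    return alpha

# Least square loss: l(f, y) = (f - y)^2 / 2, so l'(f, y) = f - y
gauss = lambda u, v: np.exp(-np.sum((u - v) ** 2))
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = np.sign(X[:, 0])
alpha = online_kernel_learning(X, y, gauss, lambda f, t: f - t)
```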

9.
This paper proposes a reproducing kernel method for solving singular integral equations (SIEs) with cosecant kernel. The main difficulty in solving such an equation lies in its singular term; to remove it, an equivalent transformation is made. Compared with previous investigations, the method has two advantages: a representation of the exact solution is obtained in a reproducing kernel Hilbert space, and the numerical accuracy is higher. Moreover, the representation of the reproducing kernel is simplified by modifying the traditional inner product, and the requirements on the image space of the operators are weaker than in the traditional reproducing kernel method. Numerical experiments illustrate that the method is efficient.

10.
This paper deals with nonparametric estimation of conditional quantile regression when the explanatory variable X takes its values in a bounded subspace of a functional space and the response Y takes its values in a compact subset of Y ⊂ R. The functional observations X1, …, Xn are projected onto a finite-dimensional subspace spanned by a suitable orthonormal system, and each Xi is represented by its coordinates in this basis. We then apply the support vector machine quantile regression approach in finite dimension to these coefficients, and establish weak consistency of the resulting estimator. The various parameters needed for its construction are selected automatically by data splitting and penalized empirical risk minimization.
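To illustrate the projection step, here is a sketch using a cosine orthonormal system on [0, 1]; the paper leaves the orthonormal system general, so the basis choice and the quadrature below are assumptions.

```python
import numpy as np

def basis_coefficients(curves, t, n_basis=5):
    # Project each functional observation X_i, sampled on the grid t in [0, 1],
    # onto the orthonormal system {1, sqrt(2) cos(pi k t)}; the resulting
    # coordinates are the finite-dimensional inputs for the SVM quantile step.
    B = np.stack([np.ones_like(t)] +
                 [np.sqrt(2.0) * np.cos(np.pi * k * t) for k in range(1, n_basis)])
    w = np.gradient(t)                  # simple quadrature weights on the grid
    return curves @ (B * w).T           # (n_curves, n_basis) coefficient matrix

t = np.linspace(0.0, 1.0, 200)
curves = np.sin(2 * np.pi * np.outer(np.arange(1, 4), t))   # three toy curves
coef = basis_coefficients(curves, t)
```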

11.
This paper discusses the learning rates of a broad class of regularized regression algorithms on a reproducing kernel Hilbert space. In analyzing the sample error of these algorithms, we employ a doubly weighted empirical process, which ensures that the variance and the penalty functional are controlled simultaneously by a single threshold, thereby avoiding a tedious iteration procedure. The learning rates obtained are faster than those in the previous literature.

12.
Analysis of Support Vector Machines Regression
Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with the ε-insensitive loss function. Compared with the well-understood least square regression, the study of SVMR is less complete, especially regarding quantitative estimates of the convergence of the algorithm. This paper provides an error analysis for SVMR, introducing some recently developed methods for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under some assumptions. Research supported by NNSF of China No. 10471002, No. 10571010 and RFDP of China No. 20060001010.
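The algorithm analyzed (though not the error analysis itself) is readily run with a standard implementation; C and epsilon below are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(200)

# The RKHS regularization strength is 1/C; epsilon is the half-width of the
# insensitive tube of the epsilon-insensitive loss
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
y_hat = model.predict(X)
```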

13.
We consider the representation of the norm of a continuous linear functional on a reproducing kernel Hilbert space, and obtain a concise expression: the squared norm equals the result of applying the functional twice, in succession, to the reproducing kernel. For the common Sobolev-Hilbert spaces, the reproducing kernel can be expressed in terms of truncated power functions, which yields a compact representation of the norm of continuous linear functionals on these spaces. This explains and simplifies existing results in the literature from a new perspective.
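The identity behind this abstract is a standard reproducing-kernel fact, stated here for reference: the Riesz representer of a bounded linear functional L on an RKHS H_K is obtained by applying L to the kernel in one argument, so applying L twice gives the squared norm.

```latex
% Riesz representer of L: g(y) = L_x K(x, y), hence
\|L\|^{2} \;=\; \langle g, g \rangle_{\mathcal{H}_K} \;=\; L\,g \;=\; L_y \bigl( L_x K(x, y) \bigr)
```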

14.
Empirical Bayes (EB) estimation of the parameter vector θ = (β, σ²) in the multiple linear regression model Y = Xβ + ε is considered, where β is the vector of regression coefficients, ε ~ N(0, σ²I), and σ² is unknown. In this paper, we construct EB estimators of β using kernel estimation of a multivariate density function and its partial derivatives. Under suitable conditions, the EB estimators are shown to converge at the rate O(n^(−λ(k−1)(k−2)/[k(2k+p+1)])), where k ≥ 3 is a natural number, 1/3 < λ < 1, and p is the dimension of the vector β. The project is supported by the National Natural Science Foundation of China.
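A sketch of the kernel density estimate and its gradient that drive such EB constructions; the Gaussian kernel and single bandwidth h are assumptions of this illustration, and the paper's kernel and bandwidth choices are more delicate.

```python
import numpy as np

def kde_with_gradient(z, data, h=0.5):
    # Gaussian-kernel estimates of a p-variate density f(z) and grad f(z);
    # ratios like grad f / f are the ingredients of empirical Bayes
    # corrections to the regression estimate (illustrative form only).
    n, p = data.shape
    u = (z[None, :] - data) / h                      # standardized differences
    w = np.exp(-0.5 * (u ** 2).sum(axis=1))          # unnormalized kernel values
    norm = n * h ** p * (2.0 * np.pi) ** (p / 2.0)
    f = w.sum() / norm
    grad = -(u * w[:, None]).sum(axis=0) / (norm * h)
    return f, grad

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
f_hat, g_hat = kde_with_gradient(np.zeros(2), data)
```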

15.
Selecting important features in nonlinear kernel spaces is a difficult challenge in both classification and regression problems. This article achieves feature selection by optimizing a simple criterion: a feature-regularized loss function. Features within the kernel are weighted, and a lasso penalty is placed on these weights to encourage sparsity. The feature-regularized loss function is minimized by estimating the weights jointly with the coefficients of the original classification or regression problem, thereby automatically selecting a subset of important features. The algorithm, KerNel Iterative Feature Extraction (KNIFE), is applicable to a wide variety of kernels and high-dimensional kernel problems. In addition, a modification of KNIFE gives a computationally attractive method for graphically depicting nonlinear relationships between features by estimating the feature weights over a range of regularization parameters. The utility of KNIFE is demonstrated through simulations and examples for both kernel regression and support vector machines. Feature path realizations also give graphical representations of important features and of the nonlinear relationships among variables. Supplementary materials with computer code and an appendix on convergence analysis are available online.
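The two ingredients of the criterion are easy to write down: a feature-weighted kernel and a lasso (soft-threshold) step on the weights. KNIFE alternates refitting the expansion coefficients with updating the weights; the sketch below shows only those two pieces, with the alternation left out.

```python
import numpy as np

def weighted_rbf(X, w, gamma=1.0):
    # RBF kernel with nonnegative per-feature weights w:
    # k_w(x, z) = exp(-gamma * sum_j w_j (x_j - z_j)^2);
    # a zero weight removes feature j from the kernel entirely.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(-1)
    return np.exp(-gamma * d2)

def lasso_step(w, lam):
    # Proximal (soft-threshold) update encouraging sparse feature weights,
    # projected back to w >= 0 as required by the weighted kernel
    return np.maximum(w - lam, 0.0)
```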

16.
In [1], [2], [3], [4], [5], [6], [7] and [8], it is very difficult to obtain a reproducing kernel space for problem (1). This paper presents a new algorithm for producing the analytical and approximate solutions of a class of fourth-order problems in a new reproducing kernel space. In the example, the numerical results are compared with both the exact solution and its n-th order derivatives. They demonstrate that the new method is accurate and efficient for fourth-order problems.

17.
汪宝彬, 殷红. 《数学杂志》 2014, 34(2): 281-286
This paper studies kernel-based online regression algorithms with varying loss functions. Using iteration and a comparison principle, we obtain the convergence rate of the algorithm and extend the result to a more general output space.

19.
Empirical Bayes estimation in a multiple linear regression model
Estimation of the vector β of regression coefficients in the multiple linear regression model Y = Xβ + ε is considered when β has a completely unknown and unspecified distribution and the error vector ε has a multivariate standard normal distribution. The optimal estimator for β, which minimizes the overall mean squared error, cannot be constructed for use in practice. Using X, Y, and the information contained in the observation vectors obtained from n independent past experiences of the problem, (empirical Bayes) estimators for β are exhibited. These estimators are compared with the optimal estimator and are shown to be asymptotically optimal. Estimators asymptotically optimal with rates near O(n^(−1)) are constructed. Supported in part by a Natural Sciences and Engineering Research Council of Canada grant.

20.
Learning gradients is one approach to variable selection and feature covariation estimation when dealing with large data sets of many variables or coordinates. In a classification setting with a convex loss function, gradient learning can be implemented by solving convex quadratic programming problems induced by regularization schemes in reproducing kernel Hilbert spaces, but the complexity of such an algorithm may be very high when the number of variables or samples is large. We introduce a gradient descent algorithm for gradient learning in classification; its implementation is simple, and its convergence is studied. Explicit learning rates are presented in terms of the regularization parameter and the step size. A detailed analysis of approximation by reproducing kernel Hilbert spaces, under mild conditions on the sampling probability measure, allows us to handle a general class of convex loss functions.
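As a drastically simplified sketch of the objective, the snippet below learns a single constant gradient vector g with the logistic loss, pairwise locality weights, and plain gradient descent; the paper instead learns a vector-valued function in an RKHS, and every constant here is an illustrative assumption.

```python
import numpy as np

def gradient_learning_sketch(X, y, s=1.0, lam=1e-2, lr=0.5, iters=300):
    # Pairs (i, j) are weighted by locality w_ij; the logistic loss phi is
    # applied to y_j * (b + g . (x_j - x_i)), the first-order Taylor proxy
    # for classifying x_j from information at x_i.
    n, p = X.shape
    diff = X[None, :, :] - X[:, None, :]             # (i, j, p): x_j - x_i
    w = np.exp(-(diff ** 2).sum(-1) / (2 * s ** 2))  # locality weights
    g, b = np.zeros(p), 0.0
    for _ in range(iters):
        m = y[None, :] * (b + diff @ g)              # margins, shape (n, n)
        dphi = -y[None, :] / (1.0 + np.exp(m))       # d/dt of log(1 + e^{-t}), times y_j
        c = w * dphi / n ** 2
        g -= lr * ((c[:, :, None] * diff).sum((0, 1)) + 2 * lam * g)
        b -= lr * c.sum()
    return g, b                                      # |g_k| ranks variable importance

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=60))
g, b = gradient_learning_sketch(X, y)
```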
