Similar Documents
20 similar documents found.
1.
蔡佳  王承 《中国科学:数学》2013,43(6):613-624
This paper studies coefficient-based regularization with the least-squares loss in sample-dependent hypothesis spaces under unbounded sampling. The learning scheme differs essentially from earlier schemes in reproducing kernel Hilbert spaces: the kernel need only be continuous and bounded, not symmetric or positive definite; the regularizer is the ℓ2-norm of the coefficients in the expansion of the function over the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The purpose of this paper is to give concentration estimates for the error via ℓ2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing an appropriate Hilbert space and using the ℓ2-empirical covering number technique, we obtain satisfactory learning rates that depend on the capacity of the hypothesis space and the regularity of the regression function.
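For concreteness, a hedged sketch (our notation, not the authors') of the coefficient-based scheme the abstract describes: with samples z = {(x_i, y_i)}_{i=1}^m and a general continuous, bounded kernel K, the estimator is an expansion over the sample points whose coefficient vector is penalized in ℓ2:

    f_z = \sum_{i=1}^{m} \alpha_i^z K(x_i, \cdot), \qquad
    \boldsymbol{\alpha}^z = \arg\min_{\boldsymbol{\alpha} \in \mathbb{R}^m}
    \frac{1}{m} \sum_{i=1}^{m} \Big( \sum_{j=1}^{m} \alpha_j K(x_j, x_i) - y_i \Big)^2
    + \lambda \sum_{j=1}^{m} \alpha_j^2 .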

2.
The least-squares regression problem is considered by regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve the learning rates in the literature.
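As a reference point, a hedged sketch (our notation) of the standard least-squares Tikhonov regularization scheme in an RKHS analyzed here and in the next entry: given samples z = {(x_i, y_i)}_{i=1}^m,

    f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \;
    \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^2 .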

3.
A standard assumption in theoretical study of learning algorithms for regression is uniform boundedness of output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of regularity of the regression function and capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.

4.
Analysis of Support Vector Machines Regression
Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with the ε-insensitive loss function. Compared with the well-understood least-squares regression, the study of SVMR is less complete, especially regarding quantitative estimates of the convergence of the algorithm. This paper provides an error analysis for SVMR and introduces some recently developed methods for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under some assumptions. Research supported by NNSF of China No. 10471002, No. 10571010 and RFDP of China No. 20060001010.
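A hedged formulation of the scheme the abstract refers to (our notation): the ε-insensitive loss and the resulting regularized problem are

    \ell_\varepsilon\big(y, f(x)\big) = \max\{\, |y - f(x)| - \varepsilon, \; 0 \,\}, \qquad
    f_z = \arg\min_{f \in \mathcal{H}_K} \;
    \frac{1}{m} \sum_{i=1}^{m} \ell_\varepsilon\big( y_i, f(x_i) \big) + \lambda \| f \|_K^2 .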

5.
This paper presents learning rates for the least-squares regularized regression algorithm with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the polynomial reproducing kernel Hilbert space as measured by covering numbers. Meanwhile, we also establish a direct approximation theorem by Bernstein-Durrmeyer operators with respect to a Borel probability measure.

6.
In this paper, we are interested in the analysis of regularized online algorithms associated with reproducing kernel Hilbert spaces. General conditions on the loss function and step sizes are given to ensure convergence. Explicit learning rates are also given for particular step sizes.
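A hedged sketch of a typical regularized online update of the kind analyzed here (our notation; the square-loss case is shown only as one instance of a general loss ℓ): starting from f_1 = 0, the iterate in the RKHS is

    f_{t+1} = f_t - \eta_t \Big( \partial_1 \ell\big( f_t(x_t), y_t \big) \, K(x_t, \cdot) + \lambda f_t \Big),

where η_t is the step size, λ the regularization parameter, and ∂_1 ℓ the derivative of the loss in its first argument; for the square loss ℓ(u, y) = (u - y)^2 the gradient term is 2 (f_t(x_t) - y_t) K(x_t, ·).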

7.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space. The approximation property and the capacity of the reproducing kernel Hilbert space are measured by the covering number. The obtained result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error. The sample error consists of the errors of learning the regression function and the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of learning the regression function (Theorem 2.2), the upper bound (Theorem 2.3) of the regularized learning algorithm associated with linear programming support vector regression is estimated.

8.
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.
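As one concrete member of the spectral family discussed (ridge regression), a minimal Python sketch; the kernel, data, and parameter values below are hypothetical illustrations, not from the paper.

import numpy as np

def kernel_ridge_fit(X, y, kernel, lam):
    # Solve (K + lam * m * I) alpha = y, the representer-theorem solution of
    # min_f (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    m = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def kernel_ridge_predict(X_train, alpha, kernel, x):
    # f(x) = sum_i alpha_i * K(x_i, x)
    return sum(a * kernel(xi, x) for a, xi in zip(alpha, X_train))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=50)
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(50)
gauss = lambda a, b: np.exp(-(a - b) ** 2 / 0.5)      # hypothetical Gaussian kernel
alpha = kernel_ridge_fit(X, y, gauss, lam=1e-3)
print(kernel_ridge_predict(X, alpha, gauss, 0.3))     # prediction at a test point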

9.
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is C^{2s}. For such a kernel on a general domain we show that the RKHS can be embedded into the function space C^s. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.
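A hedged statement of the kind of derivative reproducing property involved (our notation): if K is a C^{2s} Mercer kernel, then for every multi-index α with |α| ≤ s and every f in the RKHS,

    (\partial^{\alpha} f)(x) = \big\langle f, \; \partial^{\alpha}_{x} K(x, \cdot) \big\rangle_{K},

where ∂^{α}_{x} denotes the partial derivative of K(x, y) in the first variable; this is the identity that underlies a representer theorem for learning from function values and gradients.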

10.
The regression problem in learning theory is investigated with least square Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our previous work and apply the sampling operator to the error analysis in both the RKHS norm and the L2 norm. The tool for estimating the sample error is a Bennett inequality for random variables with values in Hilbert spaces. By taking the Hilbert space to be the one consisting of Hilbert-Schmidt operators in the RKHS, we improve the error bounds in the L2 metric, motivated by an idea of Caponnetto and de Vito. The error bounds we derive in the RKHS norm, together with a Tsybakov function we discuss here, yield interesting applications to the error analysis of the (binary) classification problem, since the RKHS metric controls the one for the uniform convergence.

11.
In the present paper, we provide an error bound for the learning rates of the regularized Shannon sampling learning scheme when the hypothesis space is a reproducing kernel Hilbert space (RKHS) derived from a Mercer kernel and a determined net. We show that if the sample is taken according to the determined net, then the sample error can be bounded by the Mercer matrix with respect to the samples and the determined net. The regularization error may be bounded by the approximation order of the reproducing kernel Hilbert space interpolation operator. The paper is an investigation of a remark provided by Smale and Zhou.

12.
We study univariate integration with the Gaussian weight for a positive variance α. This is done for the reproducing kernel Hilbert space with the Gaussian kernel for a positive shape parameter γ. We study Gauss-Hermite quadratures, although this choice of quadratures may be questionable since polynomials do not belong to this space of functions. Nevertheless, we provide the explicit formula for the error of the Gauss-Hermite quadrature using n function values. In particular, for 2αγ² < 1 we have an exponential rate of convergence, for 2αγ² = 1 we have no convergence, and for 2αγ² > 1 we have exponential divergence.
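A minimal Python sketch of Gauss-Hermite quadrature against a Gaussian weight of variance α, with a hypothetical Gaussian-kernel-type integrand (the specific numbers are illustrative, not from the paper); the closed-form value 1/sqrt(1 + 2αγ²) makes the quantity 2αγ² from the abstract visible.

import numpy as np

def gauss_hermite_gaussian_weight(f, alpha, n):
    # Approximate the integral of f against the Gaussian density of variance alpha
    # using an n-point Gauss-Hermite rule (weight exp(-t^2)) and the change of
    # variables x = sqrt(2*alpha) * t.
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.dot(w, f(np.sqrt(2.0 * alpha) * t)) / np.sqrt(np.pi)

alpha, gamma = 1.0, 0.4                     # hypothetical variance and shape parameter
f = lambda x: np.exp(-(gamma * x) ** 2)     # Gaussian-kernel-type integrand
print(gauss_hermite_gaussian_weight(f, alpha, n=20))
print(1.0 / np.sqrt(1.0 + 2.0 * alpha * gamma ** 2))   # closed-form value for comparison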

13.
We introduce a vector differential operator P and a vector boundary operator B to derive a reproducing kernel along with its associated Hilbert space, which is shown to be embedded in a classical Sobolev space. This reproducing kernel is a Green kernel of the differential operator L := P^{*T} P with homogeneous or nonhomogeneous boundary conditions given by B, where we ensure that the distributional adjoint operator P^{*} of P is well-defined in the distributional sense. We represent the inner product of the reproducing-kernel Hilbert space in terms of the operators P and B. In addition, we find relationships for the eigenfunctions and eigenvalues of the reproducing kernel and the operators with homogeneous or nonhomogeneous boundary conditions. These eigenfunctions and eigenvalues are used to compute a series expansion of the reproducing kernel and an orthonormal basis of the reproducing-kernel Hilbert space. Our theoretical results provide perhaps a more intuitive way of understanding what kind of functions are well approximated by the reproducing-kernel-based interpolant to a given multivariate data sample.
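A hedged sketch of the Mercer-type series expansion the abstract alludes to (our notation, not the paper's exact statement): if {(λ_n, e_n)} are the eigenpairs of the integral operator associated with K, with {e_n} orthonormal in L2, then

    K(x, y) = \sum_{n} \lambda_n \, e_n(x) \, e_n(y), \qquad
    \{\sqrt{\lambda_n}\, e_n\} \ \text{an orthonormal basis of } \mathcal{H}_K,

and, heuristically, the Green-kernel relationship pairs these with the eigenvalues of L via L e_n = λ_n^{-1} e_n under the stated boundary conditions.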

14.
A note on application of integral operator in learning theory
With the aid of properties of the square root of positive operators, we refine the consistency analysis of regularized least-squares regression in a reproducing kernel Hilbert space. Sharper error bounds and faster learning rates are obtained when the sampling sequence satisfies a strongly mixing condition.

15.
16.
Sufficient conditions are established in order that, for a fixed infinite set of sampling points on the full line, a function satisfies a sampling theorem on a suitable closed subspace of a unitarily translation-invariant reproducing kernel Hilbert space. A number of examples of such reproducing kernel Hilbert spaces and the corresponding sampling expansions are given. Sampling theorems for functions on the half-line are also established in RKHS using Riesz bases in subspaces of L^2(R_+).

17.
Least-squares regularized learning algorithms for regression have been well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes or for regularizers being the qth power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct an error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.
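A hedged sketch of the q-penalty scheme described (our notation):

    f_z = \arg\min_{f \in \mathcal{H}_K} \;
    \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^{q},
    \qquad 0 < q \le 2 ,

which reduces to the standard Tikhonov scheme when q = 2.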

18.
In this paper, we study the consistency of the regularized least-square regression in a general reproducing kernel Hilbert space. We characterize the compactness of the inclusion map from a reproducing kernel Hilbert space to the space of continuous functions and show that the capacity-based analysis by uniform covering numbers may fail in a very general setting. We prove the consistency and compute the learning rate by means of integral operator techniques. To this end, we study the properties of the integral operator. The analysis reveals that the essence of this approach is the isomorphism of the square root operator.

19.
Pick's theorem tells us that there exists a function in H^∞, which is bounded by 1 and takes given values at given points, if and only if a certain matrix is positive. H^∞ is the space of multipliers of H^2, and this theorem has a natural generalisation when H^∞ is replaced by the space of multipliers of a general reproducing kernel Hilbert space H(K) (where K is the reproducing kernel). J. Agler has shown that this generalised theorem is true when H(K) is a certain Sobolev space or the Dirichlet space, so it is natural to ask for which reproducing kernel Hilbert spaces this generalised theorem is true. This paper widens Agler's approach to cover reproducing kernel Hilbert spaces in general, replacing Agler's use of the deep theory of co-analytic models by a relatively elementary, and more general, matrix argument. The resulting theorem gives sufficient (and usable) conditions on the kernel K for the generalised Pick's theorem to be true for H(K), and these are then used to prove Pick's theorem for certain weighted Hardy and Sobolev spaces and for a functional Hilbert space introduced by Saitoh.
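To make the "certain matrix is positive" criterion concrete, a minimal Python sketch with hypothetical interpolation data: for the Szegő kernel of H^2 on the unit disk (the classical H^∞ case), the Pick matrix [(1 - w_i conj(w_j)) K(z_i, z_j)] must be positive semidefinite.

import numpy as np

def pick_matrix(z, w, kernel):
    # Pick matrix [(1 - w_i * conj(w_j)) * K(z_i, z_j)] for interpolation data (z_i, w_i).
    n = len(z)
    return np.array([[(1 - w[i] * np.conj(w[j])) * kernel(z[i], z[j])
                      for j in range(n)] for i in range(n)])

szego = lambda a, b: 1.0 / (1.0 - a * np.conj(b))   # reproducing kernel of H^2 on the disk

z = np.array([0.0, 0.5j, -0.3])          # hypothetical nodes in the unit disk
w = np.array([0.1, 0.2 + 0.1j, -0.05])   # hypothetical target values of modulus < 1
P = pick_matrix(z, w, szego)
# Positive semidefiniteness of P is, classically, equivalent to the existence of an
# interpolant of multiplier norm at most 1 (a function in the closed unit ball of H^infinity here).
print(np.linalg.eigvalsh(P).min() >= -1e-12)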

20.
We study the worst case setting for approximation of d-variate functions from a general reproducing kernel Hilbert space with the error measured in the L∞ norm. We mainly consider algorithms that use n arbitrary continuous linear functionals. We look for algorithms with the minimal worst case errors and for their rates of convergence as n goes to infinity. Algorithms using n function values will be analyzed in a forthcoming paper. We show that the L∞ approximation problem in the worst case setting is related to the weighted L2 approximation problem in the average case setting with respect to a zero-mean Gaussian stochastic process whose covariance function is the same as the reproducing kernel of the Hilbert space. This relation enables us to find optimal algorithms and their rates of convergence for the weighted Korobov space with an arbitrary smoothness parameter α > 1, and for the weighted Sobolev space whose reproducing kernel corresponds to the Wiener sheet measure. The optimal convergence rates are n^{-(α-1)/2} and n^{-1/2}, respectively. We also study tractability of L∞ approximation for the absolute and normalized error criteria, i.e., how the minimal worst case errors depend on the number of variables, d, especially when d is arbitrarily large. We provide necessary and sufficient conditions on tractability of L∞ approximation in terms of tractability conditions of the weighted L2 approximation in the average case setting. In particular, tractability holds in weighted Korobov and Sobolev spaces only for weights tending sufficiently fast to zero and does not hold for the classical unweighted spaces.
