Similar Documents (20 results)
1.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space, whose approximation property and capacity are measured with covering numbers. The main result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error, where the sample error combines the errors of the learned regression function and of the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of the learned regression function (Theorem 2.2), an upper bound (Theorem 2.3) for the regularized learning algorithm associated with linear programming support vector regression is obtained.
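As a concrete illustration, linear programming support vector regression replaces the usual quadratic program with a linear one: it minimizes the l1-norm of the kernel-expansion coefficients plus C times the epsilon-insensitive losses. The sketch below assumes a Gaussian kernel and illustrative values of eps, C, and gamma; none of these choices come from the paper.

```python
# Minimal LP-SVR sketch: minimize ||alpha||_1 + C * sum(xi + xi*)
# subject to the two-sided epsilon-insensitive constraints, as an LP.
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lp_svr_fit(X, y, eps=0.1, C=10.0, gamma=1.0):
    m = len(y)
    K = gaussian_kernel(X, X, gamma)
    I, O, ones = np.eye(m), np.zeros((m, m)), np.ones((m, 1))
    # Nonnegative variables: [alpha+, alpha-, xi, xi*, b+, b-].
    c = np.concatenate([np.ones(2 * m), C * np.ones(2 * m), [0.0, 0.0]])
    # f(x_i) = K_i (alpha+ - alpha-) + (b+ - b-); the constraints are
    # y - f(x) <= eps + xi  and  f(x) - y <= eps + xi*.
    A_ub = np.vstack([
        np.hstack([-K,  K, -I,  O, -ones,  ones]),
        np.hstack([ K, -K,  O, -I,  ones, -ones]),
    ])
    b_ub = np.concatenate([eps - y, eps + y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (4 * m + 2), method="highs")
    return res.x[:m] - res.x[m:2 * m], res.x[-2] - res.x[-1]

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(40)
coef, b = lp_svr_fit(X, y, eps=0.05, C=50.0, gamma=2.0)
X_test = np.linspace(0, 3, 7).reshape(-1, 1)
print(gaussian_kernel(X_test, X) @ coef + b)  # approximates sin(2x)
```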

2.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is $C^\infty$ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is $m^{-\zeta}$ with $\zeta$ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
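For orientation, the least-square regularized scheme analyzed here admits the familiar closed form given by the representer theorem. The sketch below assumes a Gaussian kernel and an illustrative regularization parameter; it is a generic kernel ridge computation, not the paper's setup.

```python
# Least-square regularization in an RKHS: the minimizer of
# (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 is
# f = sum_i a_i K(., x_i) with a = (K + lam*m*I)^{-1} y.
import numpy as np

def gram(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    m = len(y)
    return np.linalg.solve(gram(X, X, gamma) + lam * m * np.eye(m), y)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.cos(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(100)
a = krr_fit(X, y)
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(gram(X_test, X) @ a)  # approximates cos(pi * x)
```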

3.
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is $C^{2s}$. For such a kernel on a general domain we show that the RKHS can be embedded into the function space $C^s$. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.
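To make the representer theorem for function-value and gradient data concrete, here is a one-dimensional Hermite-learning sketch under assumed choices (a Gaussian kernel, a plain regularized least-squares fit, an illustrative regularization parameter); the paper works in much greater generality.

```python
# Fit f = sum_j c_j K(., x_j) to values y and derivatives dy by minimizing
# ||K c - y||^2 + ||D c - dy||^2 + lam * c^T K c, where D_ij = d/dx K(x_i, x_j).
import numpy as np

def k(x, t, g=4.0):                    # Gaussian kernel, 1-D
    return np.exp(-g * (x - t) ** 2)

def dk(x, t, g=4.0):                   # derivative in the first argument
    return -2.0 * g * (x - t) * k(x, t, g)

def hermite_fit(x, y, dy, lam=1e-4, g=4.0):
    K = k(x[:, None], x[None, :], g)
    D = dk(x[:, None], x[None, :], g)
    return np.linalg.solve(K.T @ K + D.T @ D + lam * K, K.T @ y + D.T @ dy)

x = np.linspace(0, 1, 12)
y, dy = np.sin(3 * x), 3 * np.cos(3 * x)    # sampled values and gradients
c = hermite_fit(x, y, dy)
x_test = np.linspace(0, 1, 5)
print(k(x_test[:, None], x[None, :]) @ c)   # approximates sin(3x)
print(dk(x_test[:, None], x[None, :]) @ c)  # approximates 3*cos(3x)
```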

4.
Journal of Complexity, 2005, 21(3): 337-349
Reproducing kernel Hilbert spaces are an important family of function spaces and play useful roles in various branches of analysis and applications, including kernel machine learning. When the domain of definition is compact, they can be characterized as the image of the square root of an integral operator, by means of the Mercer theorem. The purpose of this paper is to extend the Mercer theorem to noncompact domains, and to establish a functional analysis characterization of the reproducing kernel Hilbert spaces on general domains.
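A quick numerical illustration of the compact-domain Mercer expansion that the paper extends: discretizing the integral operator on a grid, the truncated eigen-expansion reconstructs the kernel. The Gaussian kernel and grid below are assumptions for the demo only.

```python
# Check K(x, t) ~= sum_k lam_k phi_k(x) phi_k(t) for a Mercer kernel on [0, 1].
import numpy as np

n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
K = np.exp(-8.0 * (x[:, None] - x[None, :]) ** 2)  # Gaussian Mercer kernel

lam, phi = np.linalg.eigh(h * K)                 # discretized operator L_K
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(h)  # descending, L2-normalized

for r in (1, 5, 10, 20):
    K_r = (phi[:, :r] * lam[:r]) @ phi[:, :r].T  # truncated Mercer expansion
    print(r, np.abs(K - K_r).max())              # error decays rapidly with r
```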

5.
The paper is devoted to the error analysis of Multicategory Support Vector Machine (MSVM) classifiers based on reproducing kernel Hilbert spaces. We choose the polynomial kernel as the Mercer kernel and give the error estimate using de la Vallée Poussin means. We also introduce the standard estimation of the sample error and derive an explicit learning rate.

6.
Based on bounded difference conditions, we propose a bounded-difference stability framework for learning algorithms. Within this framework, we study the threshold selection algorithm in machine learning, regularized learning algorithms in reproducing kernel Hilbert spaces, ranking algorithms, and bagging, and prove the bounded-difference stability of each. The results assert that all of these algorithms enjoy bounded-difference stability, which lays a theoretical foundation for their applications.
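As an informal illustration of what bounded-difference stability asserts, the sketch below replaces a single training point of a regularized RKHS least-squares learner and measures how far the output function moves in the sup norm; the kernel, regularization parameter, and data are assumptions, and this is an empirical probe, not the framework's proof technique.

```python
# Replace one sample and measure sup_x |f_S(x) - f_S'(x)|, the quantity
# that a bounded-difference condition bounds for a stable algorithm.
import numpy as np

def gram(X, Z, gamma=2.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_predict(X, y, X_eval, lam=0.1):
    a = np.linalg.solve(gram(X, X) + lam * len(y) * np.eye(len(y)), y)
    return gram(X_eval, X) @ a

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(50, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(50)
grid = np.linspace(0, 1, 500).reshape(-1, 1)

f = fit_predict(X, y, grid)
X2, y2 = X.copy(), y.copy()
X2[0], y2[0] = 0.5, 2.0            # swap one sample for an outlier
f2 = fit_predict(X2, y2, grid)
print(np.abs(f - f2).max())        # stays small for the regularized learner
```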

7.
蔡佳, 王承. 中国科学: 数学 (Scientia Sinica Mathematica), 2013, 43(6): 613-624
This paper studies coefficient-based regularization with the least squares loss in data-dependent hypothesis spaces under unbounded sampling. The learning scheme here differs essentially from earlier reproducing kernel Hilbert space settings: beyond continuity and boundedness, the kernel is not required to be symmetric or positive definite; the regularizer is the $\ell^2$-norm of the expansion coefficients of a function with respect to the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The goal of this paper is to give concentration estimates for the error via $\ell^2$-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing a suitable Hilbert space and the technique of $\ell^2$-empirical covering numbers, we obtain satisfactory learning rates in terms of the capacity of the hypothesis space and the regularity of the regression function.
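A minimal sketch of the coefficient-based scheme: the hypothesis is a kernel expansion over the samples, the penalty is the $\ell^2$-norm of the coefficients rather than an RKHS norm, and the kernel below is deliberately non-symmetric to match the relaxed assumptions. The specific kernel, data, and regularization parameter are illustrative only.

```python
# l2 coefficient regularization: minimize (1/m)||K c - y||^2 + lam ||c||^2,
# giving c = (K^T K + lam*m*I)^{-1} K^T y; K need not be symmetric or PSD.
import numpy as np

def kernel(X, Z):
    # Continuous and bounded, but NOT symmetric and NOT positive definite.
    d = X[:, None, :] - Z[None, :, :]
    return np.exp(-2.0 * (d ** 2).sum(-1)) * (1.0 + np.tanh(d.sum(-1)))

def coef_reg_fit(X, y, lam=1e-2):
    m = len(y)
    K = kernel(X, X)
    return np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(80, 1))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(80)
c = coef_reg_fit(X, y)
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(kernel(X_test, X) @ c)   # approximates x^2
```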

8.
The paper concerns lower and upper estimates for the norms of Mercer kernel matrices. We first present the Lagrange interpolating operators from the viewpoint of reproducing kernel spaces. We then modify the Lagrange interpolating operators so that they are bounded on the space of continuous functions and are of de la Vallée Poussin type. The order of approximation of continuous functions by the reproducing kernel spaces is thus obtained, from which lower and upper bounds on the Rayleigh entropy and the $\ell^2$-norm of some general Mercer kernel matrices are derived. As an example, we give the $\ell^2$-norm estimate for the Mercer kernel matrix generated by the Jacobi algebraic polynomials. The discussion indicates that the $\ell^2$-norm of Mercer kernel matrices may be estimated with discrete orthogonal transforms. Supported by the National Natural Science Foundation of China (No. 10871226).

9.
We propose a stochastic gradient descent algorithm for learning the gradient of a regression function from random samples of function values. This is a learning algorithm involving Mercer kernels. By a detailed analysis in reproducing kernel Hilbert spaces, we provide some error bounds to show that the gradient estimated by the algorithm converges to the true gradient, under some natural conditions on the regression function and suitable choices of the step size and regularization parameters.
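The paper's algorithm targets the gradient of the regression function; as a stand-in, the sketch below runs the same stochastic-gradient machinery in an RKHS on the simpler task of learning the regression function itself, to make the kernel-expansion update concrete. The kernel, step-size schedule, and regularization are illustrative assumptions, not the paper's algorithm.

```python
# Online SGD in an RKHS: f_{t+1} = f_t - eta_t ((f_t(x_t) - y_t) K(., x_t)
# + lam * f_t), keeping f_t = sum_i c_i K(., x_i) over samples seen so far.
import numpy as np

def k(x, t, g=2.0):
    return np.exp(-g * (x - t) ** 2)

rng = np.random.default_rng(4)
lam, centers, coefs = 1e-3, [], []
for t in range(1, 2001):
    x_t = rng.uniform(0, 1)
    y_t = np.sin(3 * x_t) + 0.05 * rng.standard_normal()
    f_xt = sum(c * k(x_t, xc) for c, xc in zip(coefs, centers))
    eta = 0.5 / t ** 0.5                          # decaying step size
    coefs = [(1 - eta * lam) * c for c in coefs]  # shrink existing terms
    centers.append(x_t)                           # add the new kernel section
    coefs.append(-eta * (f_xt - y_t))

x_test = np.linspace(0, 1, 5)
print([sum(c * k(xi, xc) for c, xc in zip(coefs, centers)) for xi in x_test])
# approximates sin(3x) at the test points
```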

10.
Journal of Complexity, 2002, 18(3): 739-767
The covering number of a ball of a reproducing kernel Hilbert space, as a subset of the space of continuous functions, plays an important role in learning theory. We give estimates for this covering number by means of the regularity of the Mercer kernel K. For convolution-type kernels K(x,t)=k(x-t) on $[0,1]^n$, we provide estimates depending on the decay of $\hat{k}$, the Fourier transform of k. In particular, when $\hat{k}$ decays exponentially, our estimate for this covering number is better than all previous results and covers many important Mercer kernels. A counterexample is presented to show that the eigenfunctions of the Hilbert-Schmidt operator $L_K$ associated with a Mercer kernel K may not be uniformly bounded; hence some previous methods used for estimating the covering number in learning theory are not valid. We also provide an example of a Mercer kernel to show that $L_K^{1/2}$ may not be generated by a Mercer kernel.

11.
This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish a direct approximation theorem for Bernstein-Durrmeyer operators with respect to a Borel probability measure.

12.
Semi-supervised learning is an emerging computational paradigm for machine learning that aims to make better use of large amounts of inexpensive unlabeled data to improve learning performance. While various methods have been proposed based on different intuitions, the crucial issue of generalization performance is still poorly understood. In this paper, we investigate the convergence property of Laplacian regularized least squares regression, a semi-supervised learning algorithm based on manifold regularization. Moreover, to the best of our knowledge, improved error bounds in terms of the numbers of labeled and unlabeled data are presented here for the first time. The convergence rate depends on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. Some new techniques are exploited for the analysis since an extra regularizer is introduced.
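To anchor the discussion, here is a compact Laplacian regularized least squares sketch in the spirit of manifold regularization, following the standard closed form with an expansion over both labeled and unlabeled points. The Gaussian kernel, graph weights, and parameter values are assumptions for the demo, not choices from the paper.

```python
# LapRLS: alpha = (J K + gA*l*I + (gI*l/n^2) L K)^{-1} y_padded, where J
# selects the l labeled points and L is the graph Laplacian on all n points.
import numpy as np

def gram(X, Z, gamma=5.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(5)
l, u = 10, 190
n = l + u
X = rng.uniform(0, 1, size=(n, 1))
y_pad = np.zeros(n)
y_pad[:l] = np.sin(4 * X[:l, 0])     # only the first l points carry labels

K = gram(X, X)
W = gram(X, X, gamma=50.0)           # graph weights from a narrower kernel
L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
J = np.diag((np.arange(n) < l).astype(float))

gA, gI = 1e-4, 1e-2
alpha = np.linalg.solve(
    J @ K + gA * l * np.eye(n) + (gI * l / n ** 2) * (L @ K), y_pad)
X_test = np.linspace(0, 1, 5).reshape(-1, 1)
print(gram(X_test, X) @ alpha)       # approximates sin(4x)
```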

13.
A standard assumption in theoretical study of learning algorithms for regression is uniform boundedness of output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of regularity of the regression function and capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.

14.
The regression problem in learning theory is investigated with least square Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our previous work and apply the sampling operator to the error analysis in both the RKHS norm and the $L^2$ norm. The tool for estimating the sample error is a Bennett inequality for random variables with values in Hilbert spaces. By taking the Hilbert space to be the one consisting of Hilbert-Schmidt operators on the RKHS, we improve the error bounds in the $L^2$ metric, motivated by an idea of Caponnetto and De Vito. The error bounds we derive in the RKHS norm, together with a Tsybakov function we discuss here, yield interesting applications to the error analysis of the (binary) classification problem, since the RKHS metric controls the one for uniform convergence.

15.
Consistency of spline interpolation operators and best interpolating approximation operators in $W_2^m$ spaces
张新建, 黄建华. 计算数学 (Mathematica Numerica Sinica), 2001, 23(4): 385-392
This paper discusses generalized interpolating splines determined by $n$th-order linear differential operators, and the best interpolating approximation operators in $W_2^m$ spaces. An explicit constructive method for the reproducing kernel of the $W_2^m$ space is presented, and the consistency of the spline interpolation operators with the best interpolating approximation operators in the $W_2^m$ space is proved via the reproducing kernel. An explicit expression of the approximation error on a bounded ball in the $W_2^m$ space and error estimates for the spline approximation operator are obtained.
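A toy illustration of the reproducing kernel viewpoint: with the kernel K(x, y) = 1 + min(x, y) on [0, 1] (a standard reproducing kernel of a $W_2^1$-type space, used here as an assumed simple case; the paper treats general $W_2^m$), the minimal-norm interpolant is a kernel expansion obtained from one linear solve, and it is exactly a spline.

```python
# Minimal-norm interpolation in an RKHS: s = sum_i c_i K(., x_i) with K c = y.
# For K(x, y) = 1 + min(x, y) the kernel sections are piecewise linear, so s
# is a linear spline: interpolation coincides with best approximation.
import numpy as np

def k(x, y):
    return 1.0 + np.minimum(x, y)

nodes = np.linspace(0.1, 0.9, 7)
y = np.sqrt(nodes)                                 # values to interpolate
c = np.linalg.solve(k(nodes[:, None], nodes[None, :]), y)

x_test = np.linspace(0, 1, 6)
print(k(x_test[:, None], nodes[None, :]) @ c)      # matches sqrt at the nodes
```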

16.
Splines determined by linear differential operators are an important link between polynomial splines and abstract operator splines in Hilbert spaces. Studying differential-operator splines can both reveal and generalize polynomial splines from a higher standpoint and suggest new theory and applications for abstract operator splines. The Green's function is an important tool for studying differential-operator splines [1], but in computing interpolating splines for differential operators, and in applying splines to numerical analysis, the reproducing kernel method plays an even more important role. References [2] and [3] give explicit expressions for the reproducing kernels associated with interpolating splines for second-order linear differential operators; from these, the consistency of second-order differential-operator interpolating splines with the best interpolating approximation operators in the space $W_2^1[a,b]$ was obtained; moreover, the reproducing kernel was also used to give Hi…

17.
This paper presents an error analysis for classification algorithms generated by regularization schemes with polynomial kernels. Explicit convergence rates are provided for support vector machine (SVM) soft margin classifiers. The misclassification error can be estimated by the sum of the sample error and the regularization error. The main difficulty in studying algorithms with polynomial kernels is the regularization error, which deeply involves the degrees of the kernel polynomials. Here we overcome this difficulty by bounding the reproducing kernel Hilbert space norm of Durrmeyer operators, and estimating the rate of approximation by Durrmeyer operators in a weighted $L^1$ space (the weight is a probability distribution). Our study shows that the regularization parameter should decrease exponentially fast with the sample size, which is a special feature of polynomial kernels. Dedicated to Charlie Micchelli on the occasion of his 60th birthday. Mathematics subject classifications (2000): 68T05, 62J02. The first author (Ding-Xuan Zhou) is supported partially by the Research Grants Council of Hong Kong (Project No. CityU 103704).
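For readers who want to experiment, here is a minimal soft-margin SVM with a polynomial kernel using scikit-learn; the degree, C, and synthetic data are assumptions, and the demo does not implement the paper's exponential schedule for the regularization parameter.

```python
# Soft-margin SVM classification with a polynomial kernel.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)  # quadratic boundary

clf = SVC(kernel="poly", degree=2, coef0=1.0, C=10.0).fit(X, y)
print(clf.score(X, y))  # near 1.0: a degree-2 kernel matches the boundary
```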

18.
This article is concerned with a method for solving nonlocal initial-boundary value problems for parabolic and hyperbolic integro-differential equations in reproducing kernel Hilbert space. Convergence of the proposed method is studied under some hypotheses which provide the theoretical basis of the proposed method, and some error estimates for the numerical approximation in reproducing kernel Hilbert space are presented. Finally, two numerical examples are considered to illustrate the computational efficiency and accuracy of the proposed method. Numer Methods Partial Differential Eq 33: 174-198, 2017

19.
In this article we study reproducing kernel Hilbert spaces (RKHS) associated with translation-invariant Mercer kernels. Applying a special derivative reproducing property, we show that when the kernel is real analytic, every function from the RKHS is real analytic. This is used to investigate subspaces of the RKHS generated by a set of fundamental functions. The analyticity of functions from the RKHS enables us to derive some estimates for the covering numbers which form an essential part for the analysis of some algorithms in learning theory. The work is supported by City University of Hong Kong (Project No. 7001816), and National Science Fund for Distinguished Young Scholars of China (Project No. 10529101).

20.
In this paper we consider numerical integration of smooth functions lying in a particular reproducing kernel Hilbert space. We show that the worst-case error of numerical integration in this space converges at the optimal rate, up to some power of a $\log N$ factor. A similar result is shown for the mean square worst-case error, where the bound for the latter is always better than the bound for the squared worst-case error. Finally, bounds for the integration errors of functions lying in the reproducing kernel Hilbert space are given. The paper concludes by illustrating the theory with numerical results.
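To see the worst-case error in action: for an equal-weight rule with nodes x_1, ..., x_N it has the closed form e(Q)^2 = Int Int K(x,y) dx dy - (2/N) sum_i Int K(x_i,y) dy + (1/N^2) sum_{i,j} K(x_i,x_j), which can be evaluated exactly for a simple kernel. The kernel K(x,y) = 1 + min(x,y) on [0,1] and the midpoint nodes below are assumptions for illustration, not the space studied in the paper.

```python
# Exact worst-case integration error in the RKHS with K(x, y) = 1 + min(x, y):
# Int Int K = 4/3 and Int K(x, y) dy = 1 + x - x^2/2 on [0, 1].
import numpy as np

def wce(nodes):
    N = len(nodes)
    term1 = 4.0 / 3.0
    term2 = (1.0 + nodes - nodes ** 2 / 2.0).sum() * 2.0 / N
    term3 = (1.0 + np.minimum(nodes[:, None], nodes[None, :])).sum() / N ** 2
    return np.sqrt(term1 - term2 + term3)

for N in (8, 16, 32, 64, 128):
    nodes = (np.arange(N) + 0.5) / N      # midpoint nodes
    print(N, wce(nodes))                  # observe the decay as N grows
```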
