Similar Documents
20 similar documents found.
1.
蔡佳  王承 《中国科学:数学》2013,43(6):613-624
This paper studies coefficient regularization with the least squares loss in sample-dependent hypothesis spaces under unbounded sampling. The learning scheme differs essentially from earlier schemes in reproducing kernel Hilbert spaces: the kernel is only required to be continuous and bounded, not symmetric or positive definite; the regularizer is the l2-norm of the coefficients in the expansion of a function over the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. Our goal is to derive concentration estimates for the error via l2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing a suitable Hilbert space and applying l2-empirical covering number techniques, we obtain satisfactory learning rates that depend on the capacity of the hypothesis space and the regularity of the regression function.
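To make the scheme concrete, here is a minimal sketch (our own, with hypothetical names) of coefficient-based l2-regularized least squares: the hypothesis is a kernel expansion over the samples, and the penalty acts on the coefficient vector rather than on an RKHS norm, so the kernel need not be symmetric or positive definite.

import numpy as np

def coefficient_regularized_ls(K, y, lam):
    # K[i, j] = K(x_i, x_j); the kernel is only assumed continuous and
    # bounded -- no symmetry or positive definiteness is required.
    # Minimizes (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||^2,
    # whose normal equations are (K.T K + m lam I) alpha = K.T y.
    m = K.shape[0]
    return np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)

def predict(alpha, K_new):
    # K_new[i, j] = K(x_new_i, x_j), evaluated against training inputs
    return K_new @ alpha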

2.
This paper presents learning rates for least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space measured by covering numbers. We also establish a direct approximation theorem for Bernstein-Durrmeyer operators with respect to a Borel probability measure.
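As an illustration (ours, not the paper's), the least-square regularization scheme with a polynomial kernel reduces to kernel ridge regression in the polynomial RKHS; the degree and offset below are arbitrary choices.

import numpy as np

def polynomial_kernel(X1, X2, degree=3, c=1.0):
    # K(x, x') = (<x, x'> + c)^degree, a reproducing kernel on R^d
    return (X1 @ X2.T + c) ** degree

def kernel_ridge(X, y, lam, degree=3):
    # minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam ||f||_K^2,
    # solved by f = sum_i alpha_i K(., x_i) with (K + m lam I) alpha = y
    m = X.shape[0]
    K = polynomial_kernel(X, X, degree)
    return np.linalg.solve(K + m * lam * np.eye(m), y)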

3.
4.
We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem where indefinite kernel matrices are treated as noisy observations of a true Mercer kernel. Our formulation keeps the problem convex, and relatively large problems can be solved efficiently using the projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several standard data sets.
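A minimal sketch of one ingredient, under our own simplification: treating the indefinite matrix as a noisy observation of a true Mercer kernel suggests projecting it onto the positive semidefinite cone. The paper's full method optimizes the proxy kernel jointly with the support vectors, which is not reproduced here.

import numpy as np

def nearest_psd(K0):
    # Frobenius-nearest positive semidefinite matrix to K0:
    # symmetrize, then clip negative eigenvalues to zero.
    w, V = np.linalg.eigh((K0 + K0.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T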

5.
6.
This paper studies learning rates for a broad class of regularized regression algorithms in reproducing kernel Hilbert spaces. In analyzing the sample error, we employ a weighted empirical process that allows the variance and the penalty functional to be controlled by the same threshold simultaneously, thereby avoiding a tedious iteration procedure. The learning rates obtained here are faster than those in the previous literature.

7.
A standard assumption in theoretical study of learning algorithms for regression is uniform boundedness of output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of regularity of the regression function and capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.

8.
New inequalities for the singular values of integral operators with smooth L2 kernels are obtained, and examples show them to be sharp when the kernels also satisfy certain boundary conditions. These results are based on an idea of Gohberg-Krein by which the singular values of the integral operators are related to the eigenvalues of certain two-point boundary value problems. Dedicated to Professor Ky Fan on the occasion of his 85th birthday.

9.
Motivated by the simplicity and computability of polyline functions, we consider the regularized regression learning algorithm associated with the least squares loss and the set of polyline functions. The target is the error analysis for the regression problem. The approach presented in the paper yields satisfactory learning rates. The rates depend on the approximation property of the polyline class and on its capacity measured by covering numbers. Under certain conditions, the rates achieve m^{-4/5} log m.
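For concreteness, a sketch (our construction) of least-squares fitting over polyline functions, using the standard continuous piecewise-linear basis {1, x, (x - t_j)_+} with fixed knots t_j:

import numpy as np

def polyline_design(x, knots):
    # design matrix for continuous piecewise-linear functions
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - t, 0.0, None) for t in knots]
    return np.column_stack(cols)

def fit_polyline(x, y, knots, lam=1e-6):
    # ridge-stabilized least squares: (A.T A + lam I) c = A.T y
    A = polyline_design(x, knots)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)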

10.
In this article we obtain weighted norm estimates for multilinear singular integrals with non-smooth kernels and the boundedness of certain multilinear commutators by making use of a sharp maximal function.

11.
We derive quantitative bounds for eigenvalues of complex perturbations of the indefinite Laplacian on the real line. Our results substantially improve existing results even for real potentials. For L1-potentials, we obtain optimal spectral enclosures which also accommodate embedded eigenvalues, while our results for Lp-potentials yield sharp bounds on the imaginary parts of eigenvalues of the perturbed operator for all p ∈ [1, ∞). The sharpness of the results is demonstrated by means of explicit examples.

12.
Sharp function inequalities for several vector-valued multilinear singular integral operators with non-smooth kernels are obtained. As an application, some weighted Lp (p > 1) norm inequalities for vector-valued multilinear operators are derived.

13.
The classical support vector machine regression (SVMR) is known as a regularized learning algorithm in reproducing kernel Hilbert spaces (RKHS) with an ε-insensitive loss function and an RKHS-norm regularizer. In this paper, we study a new SVMR algorithm in which the regularization term is proportional to the l1-norm of the coefficients in the kernel ensembles. We provide an error analysis of this algorithm; an explicit learning rate is then derived under some assumptions.
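A rough sketch under our own assumptions: the ε-insensitive loss is kept, the RKHS-norm regularizer is replaced by the l1-norm of the expansion coefficients, and we solve by plain subgradient descent. (The exact problem is a linear program, which a serious implementation would solve directly; the step size and iteration count below are arbitrary.)

import numpy as np

def l1_svr(K, y, lam, eps=0.1, lr=1e-3, iters=5000):
    # minimize (1/m) sum_i loss_eps((K alpha - y)_i) + lam ||alpha||_1
    # where loss_eps(r) = max(|r| - eps, 0)
    m = K.shape[0]
    alpha = np.zeros(m)
    for _ in range(iters):
        r = K @ alpha - y
        g = np.where(np.abs(r) > eps, np.sign(r), 0.0)   # loss subgradient
        alpha -= lr * (K.T @ g / m + lam * np.sign(alpha))
    return alpha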

14.
15.
We determine the ranks of the permutation representations of the simple groups B_l(q), C_l(q), and D_l(q) on the cosets of the parabolic maximal subgroups.

16.
17.
We propose variants of two SVM regression algorithms tailored to exploit additional information summarizing the relevance of each data item, as a measure of its relative importance with respect to the remaining examples. These variants, which reduce to the original formulations when all data items have the same relevance, are tested preliminarily on synthetic and real-world data sets. The obtained results outperform standard SVM approaches to regression when evaluated in light of the aforementioned additional information about data quality.
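The paper's exact formulation is not reproduced here; a common practical proxy for relevance information is per-sample weighting of the loss, which scikit-learn's SVR exposes through sample_weight. A toy example with synthetic data and hypothetical relevance weights w:

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 200)
w = rng.uniform(0.2, 1.0, 200)        # relevance of each data item

# samples with larger w incur a larger penalty outside the eps-tube
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y, sample_weight=w)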

18.
We show how certain widely used multistep approximation algorithms can be interpreted as instances of an approximate Newton method. It was shown in an earlier paper by the second author that the convergence rates of approximate Newton methods (in the context of the numerical solution of PDEs) suffer from a "loss of derivatives", and that the subsequent linear rate of convergence can be improved to superlinear using an adaptation of Nash-Moser iteration for numerical analysis purposes, the essence of the adaptation being a splitting of the inversion and the smoothing into two separate steps. We show how these ideas apply to scattered data approximation as well as to the numerical solution of partial differential equations. We investigate the use of several radial kernels for the smoothing operation. In our numerical examples we also use radial basis functions in the inversion step.
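Loosely in the spirit of that splitting, here is a toy multistep residual-correction scheme with Gaussian radial kernels of decreasing scale (coarse to fine); the kernel choice, scales, and regularization are all our own assumptions, not the paper's.

import numpy as np

def gaussian_rbf(X1, X2, scale):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def residual_correction(X, y, scales, reg=1e-8):
    # at each step, fit the current residual ("inversion"), then
    # pass the new residual on to the next, finer scale
    r, stages = y.copy(), []
    for s in scales:
        K = gaussian_rbf(X, X, s)
        c = np.linalg.solve(K + reg * np.eye(len(X)), r)
        stages.append((s, c))
        r = r - K @ c
    return stages

def evaluate(stages, X_train, X_new):
    # the approximant is the sum of the stage contributions
    return sum(gaussian_rbf(X_new, X_train, s) @ c for s, c in stages)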

19.
This paper investigates the approximation of multivariate functions from data via linear combinations of translates of a positive definite kernel from a reproducing kernel Hilbert space. If the standard interpolation conditions are relaxed to Chebyshev-type constraints, one can minimize the norm of the approximant in the Hilbert space under these constraints. By standard arguments of optimization theory, the solutions take a simple form based on the data related to the active constraints, called support vectors in the context of machine learning. The corresponding quadratic programming problems are investigated to some extent. Using monotonicity results concerning the Hilbert space norm, iterative techniques based on small quadratic subproblems on active sets are shown to be finite, even if they drop part of their previous information and even if they are used for infinite data, e.g., in the context of online learning. Numerical experiments confirm the theoretical results. Dedicated to C.A. Micchelli on the occasion of his 60th birthday. Mathematics Subject Classifications (2000): 65D05, 65D10, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63.
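A small sketch of the constrained problem, using cvxpy as an off-the-shelf QP solver (our choice; assumes the kernel matrix K is positive semidefinite). Samples whose constraint is active at the optimum play the role of support vectors.

import numpy as np
import cvxpy as cp

def cheb_constrained_fit(K, y, eta, jitter=1e-10):
    # minimize ||f||_K^2 = alpha.T K alpha for f = sum_i alpha_i K(., x_i)
    # subject to the Chebyshev-type constraints |f(x_i) - y_i| <= eta
    m = K.shape[0]
    L = np.linalg.cholesky(K + jitter * np.eye(m))  # alpha.T K alpha = ||L.T alpha||^2
    alpha = cp.Variable(m)
    cp.Problem(cp.Minimize(cp.sum_squares(L.T @ alpha)),
               [cp.abs(K @ alpha - y) <= eta]).solve()
    return alpha.value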

20.
Let X and Y be Banach spaces, 0 < q < +∞, 1 < p < +∞. In this paper, we characterize matrix transformations from l_q(X) to l_p(Y).

