Similar Articles
1.
Motivated by multi-task machine learning with Banach spaces, we propose the notion of vector-valued reproducing kernel Banach spaces (RKBSs). Basic properties of the spaces and the associated reproducing kernels are investigated. We also present feature map constructions and several concrete examples of vector-valued RKBSs. The theory is then applied to multi-task machine learning. In particular, the representer theorem and characterization equations for the minimizer of regularized learning schemes in vector-valued RKBSs are established.
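For orientation, the classical scalar-valued RKHS representer theorem that this paper generalizes to vector-valued RKBSs states that the minimizer of a regularized empirical risk lies in the span of the kernel sections at the data (a sketch of the standard Hilbert-space statement, not the Banach-space version established in the paper):

$$ \min_{f\in\mathcal H_K}\ \frac{1}{m}\sum_{i=1}^m \big(f(x_i)-y_i\big)^2 + \lambda\,\|f\|_K^2 \quad\Longrightarrow\quad f^\star=\sum_{i=1}^m c_i\,K(\cdot,x_i),\qquad c_i\in\mathbb R. $$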

2.
We propose a definition of generalized semi-inner products (g.s.i.p.). By relating them to duality mappings from a normed vector space to its dual space, we obtain a characterization of all g.s.i.p. satisfying this definition. We then study the Riesz representation of continuous linear functionals via g.s.i.p. As applications, we establish a representer theorem and characterization equation for the minimizer of regularized learning from finite or infinite samples in Banach spaces of functions.

3.
Least-squares regularized learning algorithms for regression are well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes, and for regularizers given by the qth power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.
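As a point of reference, the q = 2 case of this scheme is ordinary kernel ridge regression and admits a closed form. A minimal sketch follows; it assumes i.i.d. samples (whereas the paper treats α-mixing sequences), and the Gaussian kernel and parameter values are illustrative choices, not the paper's.

```python
import numpy as np

def gaussian_gram(X, Z, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||X_i - Z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_ridge(X, y, lam, sigma=1.0):
    # Minimizer of (1/m) sum_i (f(x_i) - y_i)^2 + lam ||f||_K^2 over the RKHS:
    # by the representer theorem, f(x) = sum_i c_i K(x, x_i) with
    # coefficients c = (K + lam * m * I)^{-1} y.
    m = len(y)
    K = gaussian_gram(X, X, sigma)
    c = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda X_new: gaussian_gram(X_new, X, sigma) @ c
```

For 0 < q < 2 the penalty is no longer quadratic and the minimizer has no closed form; iterative solvers are needed.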

4.
The classical support vector machine regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces (RKHS) with an ε-insensitive loss function and an RKHS-norm regularizer. In this paper, we study a new SVMR algorithm in which the regularization term is proportional to the l1-norm of the coefficients in the kernel ensemble. We provide an error analysis of this algorithm, and an explicit learning rate is then derived under some assumptions.
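A minimal sketch of l1-coefficient-regularized kernel regression follows. Note the deliberate simplification: the paper pairs the l1 penalty with the ε-insensitive loss, while this sketch substitutes the square loss so an off-the-shelf Lasso solver applies; the kernel and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_gram(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def l1_kernel_regression(X, y, lam, sigma=1.0):
    # f(x) = sum_j c_j K(x, x_j) with an l1 penalty on c: under square loss
    # this is a Lasso problem whose design matrix is the Gram matrix, and
    # the l1 penalty drives most coefficients in the kernel ensemble to zero.
    K = gaussian_gram(X, X, sigma)
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(K, y)
    return lambda X_new: gaussian_gram(X_new, X, sigma) @ model.coef_
```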

5.
Cai Jia, Wang Cheng. Scientia Sinica Mathematica (《中国科学:数学》), 2013, 43(6): 613-624
This paper discusses coefficient regularization with the least-squares loss in data-dependent hypothesis spaces under unbounded sampling. The learning scheme differs essentially from the earlier reproducing kernel Hilbert space setting: beyond continuity and boundedness, the kernel is not required to be symmetric or positive definite; the regularizer is the l2-norm of the expansion coefficients of the function with respect to the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The purpose of this paper is to give concentration estimates of the error via l2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing an appropriate Hilbert space together with l2-empirical covering number techniques, we obtain satisfactory learning rates that depend on the capacity of the hypothesis space and the regularity of the regression function.
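A minimal sketch of the coefficient-regularization idea under square loss: because the penalty acts on the expansion coefficients rather than on an RKHS norm, the kernel matrix need not be symmetric or positive semi-definite. The least-squares formulation and variable names here are illustrative assumptions.

```python
import numpy as np

def coefficient_regularized_fit(K, y, lam):
    # Minimize (1/m) ||K c - y||^2 + lam ||c||^2 over coefficients c, where
    # K[i, j] = k(x_i, x_j) may be non-symmetric and indefinite.
    # Normal equations: (K^T K + lam * m * I) c = K^T y.
    m = len(y)
    return np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)
```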

6.
Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap, additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded and non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.

7.
We consider solution methods for a class of stochastic linear complementarity problems, the aim being to minimize a regularized expected residual defined through an NCP function. A sequence of observations is generated by a quasi-Monte Carlo method, and we prove that any accumulation point of the minimizers of the discrete approximation problems is a minimizer of the expected residual (ERM) of the corresponding stochastic linear complementarity problem; a sufficient condition for the boundedness of the ERM solutions is also obtained. We further prove that the ERM approach yields robust solutions with stability and minimal sensitivity.

8.
Semi-supervised learning is an emerging computational paradigm for machine learning that aims to make better use of large amounts of inexpensive unlabeled data to improve learning performance. While various methods have been proposed based on different intuitions, the crucial issue of generalization performance is still poorly understood. In this paper, we investigate the convergence property of Laplacian regularized least squares regression, a semi-supervised learning algorithm based on manifold regularization. Moreover, an improvement of the error bounds in terms of the number of labeled and unlabeled data is presented, to the best of our knowledge for the first time. The convergence rate depends on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. Some new techniques are exploited for the analysis, since an extra regularizer is introduced.
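A minimal sketch of the Laplacian regularized least squares estimator in its usual finite-sample form (in the spirit of Belkin, Niyogi and Sindhwani). The placement of labeled points first, and the folding of the Laplacian-term normalization into lam_I, are assumptions of this sketch.

```python
import numpy as np

def laprls_coefficients(K, y_labeled, L, lam_A, lam_I):
    # K: (l+u) x (l+u) Gram matrix over labeled + unlabeled inputs,
    # y_labeled: the l labels (labeled points assumed to come first),
    # L: graph Laplacian built from all l+u inputs.
    # The minimizer of
    #   (1/l) sum_{i<=l} (f(x_i) - y_i)^2 + lam_A ||f||_K^2 + lam_I f^T L f
    # expands as f = sum_j c_j K(., x_j) with the coefficients c below.
    l, n = len(y_labeled), K.shape[0]
    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)   # selects labeled rows
    y = np.zeros(n); y[:l] = y_labeled
    A = J @ K + lam_A * l * np.eye(n) + lam_I * l * (L @ K)
    return np.linalg.solve(A, y)
```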

9.
We introduce a regularized equilibrium problem in Banach spaces, involving generalized Bregman functions. For this regularized problem, we establish the existence and uniqueness of solutions. These regularizations yield a proximal-like method for solving equilibrium problems in Banach spaces. We prove that the proximal sequence is an asymptotically solving sequence when the dual space is uniformly convex. Moreover, we prove that all weak accumulation points are solutions if the equilibrium function is lower semicontinuous in its first variable. We prove, under additional assumptions, that the proximal sequence converges weakly to a solution.

10.
11.
We study elastostatic boundary value problems with a conical boundary point by the method of integral equations. The equations of such problems are singular. In the case of a smooth surface, we construct a regularizer for these equations; in the case of a surface with a conical point, the regularizer is constructed in such a way as to ensure that the kernel of the regularized equation belongs to the class B and satisfies the assumptions of the Fredholm alternative theorem. We analyze the properties of elastic potentials in the case of a surface with a conical point.

12.
In this paper, we investigate strong convergence for monotone operators. We first introduce a hybrid-type algorithm for monotone operators. Next, we obtain a strong convergence theorem (Theorem 3.3) for finding a zero point of an inverse-strongly monotone operator in a Banach space. Finally, we apply our convergence theorem to the problem of finding a minimizer of a convex function.

13.
This paper concerns the study of a differential variational–hemivariational inequality (DVHVI, for short) in infinite-dimensional Banach spaces. We first introduce a new concept of gap functions for the variational control system of (DVHVI). We then consider two kinds of gap functions, the regularized gap function and the Moreau–Yosida regularized gap function, and examine their relevant properties. Moreover, two global error bounds, which depend implicitly on the regularized gap function and on the Moreau–Yosida regularized gap function respectively, are obtained. Finally, to illustrate the applicability of the theoretical results, we investigate a coupled dynamic system formulated by a nonlinear reaction–diffusion equation described by a time-dependent nonsmooth semipermeability problem.

14.
Our purpose in this paper is to approximate solutions of accretive operators in Banach spaces. Motivated by Halpern's iteration and Mann's iteration, we prove weak and strong convergence theorems for resolvents of accretive operators. Using these results, we consider the convex minimization problem of finding a minimizer of a proper lower semicontinuous convex function and the variational problem of finding a solution of a variational inequality.
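For concreteness, here is a minimal sketch of the Halpern-type scheme the paper builds on, applied to a generic nonexpansive map; for an accretive operator A one would take T to be its resolvent J_r = (I + rA)^{-1}. The step-size schedule a_n = 1/(n+2) is an illustrative choice satisfying the usual conditions (a_n -> 0, sum a_n = infinity).

```python
import numpy as np

def halpern_iterate(T, x0, anchor, steps=10_000):
    # Halpern iteration: x_{n+1} = a_n * u + (1 - a_n) * T(x_n).
    # Under standard conditions the sequence converges strongly to a
    # fixed point of the nonexpansive map T.
    x = np.asarray(x0, dtype=float)
    u = np.asarray(anchor, dtype=float)
    for n in range(steps):
        a = 1.0 / (n + 2)
        x = a * u + (1.0 - a) * T(x)
    return x

# Example: T(x) = x / 2 is the resolvent of A(x) = x with r = 1,
# whose unique zero is the origin; the iteration tends to it.
sol = halpern_iterate(lambda x: x / 2.0, x0=[1.0, -3.0], anchor=[0.5, 0.5])
```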

15.
Gaussians are important tools for learning from data of large dimensions. The variance of a Gaussian kernel is a measurement of the frequency range of function components or features retrieved by learning algorithms induced by the Gaussian. The learning ability and approximation power increase as the variance of the Gaussian decreases. Thus, it is natural to use Gaussians with decreasing variances for online algorithms in which samples arrive one by one. In this paper, we consider fully online classification algorithms associated with a general loss function and varying Gaussians, which are closely related to regularization schemes in reproducing kernel Hilbert spaces. Learning rates are derived in terms of the smoothness of a target function associated with the probability measure controlling sampling and with the loss function. A critical estimate is given for the norm of the difference of regularized target functions as the variance of the Gaussian changes. Concrete learning rates are presented for the online learning algorithm with the least square loss function.
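A minimal sketch of such a fully online scheme in the least-square-loss case. The step-size and variance schedules are illustrative assumptions, and the bookkeeping choice of letting each support term keep the Gaussian width in force when it was added is a simplification of this sketch, not the paper's construction.

```python
import numpy as np

def gaussian(x, z, sigma):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2)
                  / (2.0 * sigma**2))

def online_varying_gaussian(stream, lam=0.01, eta0=0.5, sigma0=2.0):
    # Online regularized update with least square loss:
    #   f_{t+1} = (1 - eta_t * lam) f_t - eta_t (f_t(x_t) - y_t) K_t(., x_t),
    # where K_t is a Gaussian whose variance shrinks as samples arrive.
    terms = []  # (center, coefficient, Gaussian width at insertion time)
    def f(x):
        return sum(c * gaussian(x, z, s) for z, c, s in terms)
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta0 / t ** 0.5          # polynomially decaying step size
        sigma = sigma0 / t ** 0.1      # slowly decreasing Gaussian width
        err = f(x) - y
        terms = [(z, (1.0 - eta * lam) * c, s) for z, c, s in terms]
        terms.append((x, -eta * err, sigma))
    return f
```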

16.
The noise contained in data measured by imaging instruments is often primarily of Poisson type. This motivates, in many cases, the use of the Poisson negative log-likelihood function in place of the ubiquitous least squares data fidelity when solving image deblurring problems. We assume that the underlying blurring operator is compact, so that, as in the least squares case, the resulting minimization problem is ill-posed and must be regularized. In this paper, we focus on total variation regularization and show that the problem of computing the minimizer of the resulting total-variation-penalized Poisson likelihood functional is well-posed. We then prove that, as the errors in the data and in the blurring operator tend to zero, the resulting minimizers converge to the minimizer of the exact likelihood function. Finally, the practical effectiveness of the approach is demonstrated on synthetically generated data, and a nonnegatively constrained, projected quasi-Newton method is introduced.
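A minimal one-dimensional sketch of the variational problem: Poisson negative log-likelihood plus a smoothed total-variation penalty, minimized under a nonnegativity constraint. The paper introduces a projected quasi-Newton method; plain projected gradient descent is substituted here for clarity, and the smoothing parameter and step size are illustrative.

```python
import numpy as np

def deblur_tv_poisson(A, z, beta, steps=2000, lr=1e-3, eps=1e-8):
    # Minimize sum(A u - z * log(A u)) + beta * TV_eps(u) subject to u >= 0,
    # where TV_eps(u) = sum_i sqrt((u_{i+1} - u_i)^2 + eps) is a smoothed
    # 1-D total variation.
    u = np.full(A.shape[1], max(z.mean(), eps))
    for _ in range(steps):
        Au = A @ u
        g_fid = A.T @ (1.0 - z / (Au + eps))   # gradient of Poisson NLL
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)            # d TV_eps / d (differences)
        g_tv = np.zeros_like(u)
        g_tv[:-1] -= w                          # chain rule through np.diff
        g_tv[1:] += w
        u = np.maximum(u - lr * (g_fid + beta * g_tv), 0.0)  # project u >= 0
    return u
```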

17.
In this paper, we investigate the generalization performance of a regularized ranking algorithm in a reproducing kernel Hilbert space associated with the least square ranking loss. An explicit expression for the solution via a sampling operator is derived and plays an important role in our analysis. A convergence analysis for learning a ranking function is provided, based on a novel capacity independent approach, which yields results stronger than those of previous studies of the ranking problem.
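A minimal sketch of kernel least-squares ranking. The reduction of the pairwise sum to a centering matrix is elementary algebra; the kernel matrix K and the regularization parameter are assumed inputs, and the 1/m^2 normalization is one common convention.

```python
import numpy as np

def rank_rls_coefficients(K, y, lam):
    # Minimize over f = sum_i c_i K(., x_i):
    #   (1/m^2) sum_{i,j} ((f(x_i) - f(x_j)) - (y_i - y_j))^2 + lam ||f||_K^2.
    # Writing a = K c - y, the pairwise sum equals 2 a^T H a with the
    # centering matrix H = m I - 1 1^T, leading to the linear system
    #   ((2/m^2) H K + lam I) c = (2/m^2) H y.
    m = len(y)
    H = m * np.eye(m) - np.ones((m, m))
    A = (2.0 / m**2) * H @ K + lam * np.eye(m)
    return np.linalg.solve(A, (2.0 / m**2) * (H @ y))
```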

18.
Based on C-regularized resolvent families and bi-continuous C_0 semigroups, the notion of bi-continuous C-regularized resolvent families is introduced. The relation between the generator of a bi-continuous C-regularized resolvent family and its resolvent is examined, and a Hille-Yosida type generation theorem for bi-continuous C-regularized resolvent families is given, thereby extending the generation theorem for strongly continuous semigroups on Banach spaces.

19.
This paper considers online classification learning algorithms for regularized classification schemes with generalized gradient. A novel capacity independent approach is presented. It establishes strong convergence of the algorithm and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm needs fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and the regularization parameter.

20.
Analysis of Support Vector Machines Regression
Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with a loss function called the ε-insensitive loss. Compared with the well-understood least squares regression, the study of SVMR is less satisfactory, especially as regards quantitative estimates of the convergence of this algorithm. This paper provides an error analysis for SVMR, drawing on some recently developed methods for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under some assumptions. Research supported by NNSF of China Nos. 10471002 and 10571010 and RFDP of China No. 20060001010.
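For readers who want to experiment, a minimal usage sketch with scikit-learn's SVR, where C acts as the inverse of the regularization parameter and epsilon sets the half-width of the insensitive tube; the data here are synthetic and the parameter values illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# epsilon-insensitive loss: residuals below epsilon incur no cost, so the
# fitted function ignores noise inside a tube of width 2 * epsilon.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("support vectors:", len(model.support_), "of", len(y))
```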
