Similar documents
20 similar documents found (search time: 15 ms)
1.
We present a theoretical framework for reproducing kernel-based reconstruction methods in certain generalized Besov spaces based on positive, essentially self-adjoint operators. An explicit representation of the reproducing kernel is given in terms of an infinite series. We provide stability estimates for the kernel, including inverse Bernstein-type estimates for kernel-based trial spaces, and we give condition estimates for the interpolation matrix. Then, a deterministic error analysis for regularized reconstruction schemes is presented by means of sampling inequalities. In particular, we provide error bounds for a regularized reconstruction scheme based on a numerically feasible approximation of the kernel. This allows us to derive explicit coupling relations between the series truncation, the regularization parameters and the data set.

2.
In this paper, we study the consistency of the regularized least-square regression in a general reproducing kernel Hilbert space. We characterize the compactness of the inclusion map from a reproducing kernel Hilbert space to the space of continuous functions and show that the capacity-based analysis by uniform covering numbers may fail in a very general setting. We prove the consistency and compute the learning rate by means of integral operator techniques. To this end, we study the properties of the integral operator. The analysis reveals that the essence of this approach is the isomorphism of the square root operator.

3.
Bayesian l0-regularized least squares is a variable selection technique for high-dimensional predictors. The challenge is optimizing a nonconvex objective function via search over model space consisting of all possible predictor combinations. Spike-and-slab (aka Bernoulli-Gaussian) priors are the gold standard for Bayesian variable selection, with a caveat of computational speed and scalability. Single best replacement (SBR) provides a fast scalable alternative. We provide a link between Bayesian regularization and proximal updating, which provides an equivalence between finding a posterior mode and a posterior mean with a different regularization prior. This allows us to use SBR to find the spike-and-slab estimator. To illustrate our methodology, we provide simulation evidence and a real data example on the statistical properties and computational efficiency of SBR versus direct posterior sampling using spike-and-slab priors. Finally, we conclude with directions for future research.
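The proximal-updating view of l0 regularization mentioned above can be illustrated with iterative hard thresholding, a simple proximal-gradient scheme used here as a stand-in for the SBR search (it is not SBR itself); the design matrix, penalty level, and step size below are illustrative assumptions.

```python
import numpy as np

def hard_threshold(z, t):
    """Proximal operator of the penalty t * ||b||_0:
    keep z_j when z_j^2 > 2 * t, set it to zero otherwise."""
    out = z.copy()
    out[z ** 2 <= 2.0 * t] = 0.0
    return out

def iht(X, y, lam, step, n_iter=500):
    """Proximal-gradient (iterative hard thresholding) iteration for the
    nonconvex objective (1/2) * ||y - X b||^2 + lam * ||b||_0."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = hard_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
b_true = np.zeros(20)
b_true[[0, 5, 10]] = 3.0                      # sparse ground truth
y = X @ b_true + 0.1 * rng.standard_normal(100)
step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / largest eigenvalue of X^T X
b_hat = iht(X, y, lam=100.0, step=step)
```

Each iteration is a gradient step on the least-squares term followed by the exact proximal map of the l0 penalty, which is the mode-finding side of the posterior-mode/posterior-mean equivalence the abstract refers to.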

4.
This paper addresses the learning algorithm on the unit sphere. The main purpose is to present an error analysis for regression generated by regularized least square algorithms with spherical harmonics kernel. The excess error can be estimated by the sum of sample errors and regularization errors. Our study shows that by introducing a suitable spherical harmonics kernel, the regularization parameter can decrease arbitrarily fast with the sample size.

5.
A note on application of integral operator in learning theory
By the aid of the properties of the square root of positive operators we refine the consistency analysis of regularized least square regression in a reproducing kernel Hilbert space. Sharper error bounds and faster learning rates are obtained when the sampling sequence satisfies a strongly mixing condition.

6.
This paper studies the learning rates of a broad class of regularized regression algorithms in reproducing kernel Hilbert spaces. In analyzing the sample error of these algorithms, we employ a weighted empirical process, which ensures that the variance and the penalty functional are controlled by a threshold simultaneously, thereby avoiding a tedious iteration procedure. We obtain faster learning rates than previous results in the literature.

7.
In the present paper, we give an investigation on the learning rate of l2-coefficient regularized classification with strong loss and the data dependent kernel functional spaces. The results show that the learning rate is influenced by the strong convexity.

8.
In this paper we present analogues of the maximum principle and of some parabolic inequalities for the regularized time-dependent Schrödinger operator on open manifolds using Günter derivatives. Moreover, we study the uniqueness of bounded solutions for the regularized Schrödinger–Günter problem and obtain the corresponding fundamental solution. Furthermore, we present a regularized Schrödinger kernel and prove some convergence results. Finally, we present an explicit construction for the fundamental solution to the Schrödinger–Günter problem on a class of conformally flat cylinders and tori.

9.
In this paper, we investigate the generalization performance of a regularized ranking algorithm in a reproducing kernel Hilbert space associated with the least square ranking loss. An explicit expression for the solution via a sampling operator is derived and plays an important role in our analysis. Convergence analysis for learning a ranking function is provided, based on a novel capacity-independent approach, which is stronger than those used in previous studies of the ranking problem.

10.
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.

11.
Fredholm integral equations with the right-hand side having singularities at the endpoints are considered. The singularities are moved into the kernel, which is subsequently regularized by a suitable one-to-one map. The Nyström method is applied to the regularized equation. The convergence, stability and well-conditioning of the method are proved in spaces of weighted continuous functions. The special case of the weakly singular and symmetric kernel is also investigated. Several numerical tests are included.
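A minimal Nyström discretization of a second-kind Fredholm equation can be sketched as follows; the smooth (non-singular) separable kernel and the exact solution are assumptions chosen so the example can be checked by hand, and the weighted-space regularization the abstract describes is not shown.

```python
import numpy as np

def nystrom_solve(kernel, g, a, b, n):
    """Nystrom discretization of the second-kind Fredholm equation
        f(x) - int_a^b k(x, t) f(t) dt = g(x)
    using n-point Gauss-Legendre quadrature: replace the integral by the
    quadrature rule and collocate at the nodes, giving (I - K W) f = g."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    w = 0.5 * (b - a) * w
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, g(x))

# Separable test kernel k(x, t) = x * t on [0, 1] with exact solution
# f(x) = x: since int_0^1 (x * t) * t dt = x / 3, we have g(x) = 2 * x / 3.
x, f = nystrom_solve(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, 0.0, 1.0, 8)
```

Because the integrand here is polynomial, the Gauss rule is exact and the Nyström solution reproduces f(x) = x at the nodes to machine precision.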

12.
In regularized kernel methods, the solution of a learning problem is found by minimizing a functional consisting of an empirical risk and a regularization term. In this paper, we study the existence of an optimal solution for multi-kernel regularization learning. First, we refine a previous conclusion about this problem given by Micchelli and Pontil, and prove that the optimal solution exists whenever the kernel set is compact. Second, we consider this problem for Gaussian kernels with variance σ∈(0,∞), and give some conditions under which the optimal solution exists.

13.
We view regularized learning of a function in a Banach space from its finite samples as an optimization problem. Within the framework of reproducing kernel Banach spaces, we prove the representer theorem for the minimizer of regularized learning schemes with a general loss function and a nondecreasing regularizer. When the loss function and the regularizer are differentiable, a characterization equation for the minimizer is also established.

14.
In the present paper, we give an investigation on the learning rate of l2-coefficient regularized classification with strong loss and the data dependent kernel functional spaces. The results show that the learning rate is influenced by the strong convexity.

15.
In the present paper, we provide an error bound for the learning rates of the regularized Shannon sampling learning scheme when the hypothesis space is a reproducing kernel Hilbert space (RKHS) derived by a Mercer kernel and a determined net. We show that if the sample is taken according to the determined net, then the sample error can be bounded by the Mercer matrix with respect to the samples and the determined net. The regularization error may be bounded by the approximation order of the reproducing kernel Hilbert space interpolation operator. The paper is an investigation on a remark provided by Smale and Zhou.

16.
Cai Jia, Wang Cheng. 《中国科学:数学》 (Scientia Sinica Mathematica), 2013, 43(6): 613-624
This paper studies coefficient regularization with the least squares loss in data-dependent hypothesis spaces under unbounded sampling. The learning criterion here differs essentially from the earlier criteria in reproducing kernel Hilbert spaces: beyond continuity and boundedness, the kernel need not be symmetric or positive definite; the regularizer is the l2-norm of the expansion coefficients of a function with respect to the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The goal of this paper is to derive concentration estimates for the error via l2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing a suitable Hilbert space together with the technique of l2-empirical covering numbers, we obtain satisfactory learning rates related to the capacity of the hypothesis space and the regularity of the regression function.

17.
In this paper we present a redesign of a linear algebra kernel of an interior point method to avoid the explicit use of problem matrices. The only access to the original problem data needed are the matrix-vector multiplications with the Hessian and Jacobian matrices. Such a redesign requires the use of suitably preconditioned iterative methods and imposes restrictions on the way the preconditioner is computed. A two-step approach is used to design a preconditioner. First, the Newton equation system is regularized to guarantee better numerical properties and then it is preconditioned. The preconditioner is implicit, that is, its computation requires only matrix-vector multiplications with the original problem data. The method is therefore well-suited to problems in which matrices are not explicitly available and/or are too large to be stored in computer memory. Numerical properties of the approach are studied, including the analysis of the conditioning of the regularized system and that of the preconditioned regularized system. The method has been implemented, and preliminary computational results for problems with up to 1 million variables and 10 million nonzero elements demonstrate the feasibility of the approach.
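The matrix-free idea — accessing the problem only through matrix-vector products, with regularization shifting the spectrum to improve conditioning — can be sketched with a plain conjugate-gradient solver. The random data and the regularization value delta below are illustrative assumptions, and the implicit preconditioner described in the abstract is not shown.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients for a symmetric positive definite system,
    using only matrix-vector products: the system matrix is never formed."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
delta = 1e-2  # regularization shifts the spectrum of A^T A away from zero

# Only matvecs with A and A^T are used; (A^T A + delta * I) is never assembled.
matvec = lambda v: A.T @ (A @ v) + delta * v
x = cg(matvec, A.T @ b)
```

The same solver works unchanged when the matvec is supplied by an operator too large to store, which is exactly the setting the redesign targets.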

18.
We consider a system of singularly perturbed Fredholm integro-differential equations with rapidly varying kernel and develop an algorithm for constructing regularized asymptotic solutions. It is shown that, in the presence of a rapidly decaying factor multiplying the kernel, the original problem is not on the spectrum (i.e., is solvable for any right-hand side). To prove this, we obtain and use a representation of the resolvent (for sufficiently small ε > 0) in the form of a function series that is uniformly convergent in the ordinary sense.

19.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^{-ζ} with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.

20.
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is C^{2s}. For such a kernel on a general domain we show that the RKHS can be embedded into the function space C^s. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.

