Found 20 similar documents; search took 15 ms.
1.
We study differentiability of functions in the reproducing kernel Hilbert space (RKHS) associated with a smooth Mercer-like kernel on the sphere. We show that differentiability of the kernel up to a certain order yields both differentiability, up to the same order, of the elements in the series representation of the kernel and a series representation for the corresponding derivatives of the kernel. These facts are used to embed the RKHS into spaces of differentiable functions and to deduce reproducing properties for the derivatives of functions in the RKHS. We discuss compactness and boundedness of the embedding and some applications to Gaussian-like kernels.
2.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^(−ζ) with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
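The least-square regularized algorithm above is, in its simplest form, kernel ridge regression: by the representer theorem, the minimizer over the RKHS is a finite kernel expansion whose coefficients solve a linear system. A minimal illustrative sketch, not the paper's analysis; the kernel width, regularization parameter, and toy data are arbitrary choices:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    """Gaussian (Mercer) kernel matrix K[i, j] = exp(-(x_i - y_j)^2 / (2 sigma^2))."""
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * sigma**2))

def krr_fit(x, y, lam, sigma=0.1):
    """Regularized least squares in the RKHS:
        min_f  (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem f(t) = sum_i c_i K(t, x_i), where the
    coefficients solve (K + lam * m * I) c = y."""
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

# toy regression data: noisy samples of sin(2 pi x)
rng = np.random.default_rng(0)
m = 50
x = np.sort(rng.uniform(0, 1, m))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=m)

c = krr_fit(x, y, lam=1e-3)
x_test = np.linspace(0, 1, 200)
f_test = gaussian_kernel(x_test, x) @ c                  # learned function on a grid
train_mse = np.mean((gaussian_kernel(x, x) @ c - y) ** 2)
```

As the sample size m grows and lam is tuned accordingly, the paper's error analysis quantifies how fast the learned function approaches the regression function.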
3.
We establish some perturbed minimization principles, and we develop a theory of subdifferential calculus, for functions defined on Riemannian manifolds. Then we apply these results to show existence and uniqueness of viscosity solutions to Hamilton–Jacobi equations defined on Riemannian manifolds.
4.
Orizon P. Ferreira 《Journal of Mathematical Analysis and Applications》2006,313(2):587-597
A characterization of Lipschitz behavior of functions defined on Riemannian manifolds is given in this paper. First, the concept of proximal subgradient and some results of proximal analysis are extended from the Hilbert space setting to Riemannian manifolds. A technique introduced by Clarke, Stern and Wolenski [F.H. Clarke, R.J. Stern, P.R. Wolenski, Subgradient criteria for monotonicity, the Lipschitz condition, and convexity, Canad. J. Math. 45 (1993) 1167-1183] for generating proximal subgradients of functions defined on a Hilbert space is also extended to Riemannian manifolds in order to provide that characterization. A number of examples of Lipschitz functions are presented to show that the Lipschitz behavior of functions defined on Riemannian manifolds depends on the Riemannian metric.
5.
In this paper we study the Riesz transform on complete and connected Riemannian manifolds M with a certain spectral gap in the L2 spectrum of the Laplacian. We show that on such manifolds the Riesz transform is Lp bounded for all p∈(1,∞). This generalizes a result by Mandouvalos and Marias and extends a result by Auscher, Coulhon, Duong, and Hofmann to the case where zero is an isolated point of the L2 spectrum of the Laplacian.
6.
A. Barani 《Journal of Mathematical Analysis and Applications》2007,328(2):767-779
The concept of a geodesic invex subset of a Riemannian manifold is introduced. Geodesic invex and preinvex functions on a geodesic invex set with respect to particular maps are defined. The relation between geodesic invexity and preinvexity of functions on manifolds is studied. Using the proximal subdifferential, certain results concerning extremum points of a nonsmooth geodesic preinvex function on a geodesic invex set are obtained. The mean value inequality and the mean value theorem of invexity analysis are extended to Cartan–Hadamard manifolds.
7.
Learning gradients is one approach to variable selection and feature covariation estimation when dealing with large data sets of many variables or coordinates. In a classification setting involving a convex loss function, a possible algorithm for gradient learning is implemented by solving convex quadratic programming problems induced by regularization schemes in reproducing kernel Hilbert spaces. The complexity of such an algorithm can be very high when the number of variables or samples is large. We introduce a gradient descent algorithm for gradient learning in classification. The implementation of this algorithm is simple, and its convergence is established. Explicit learning rates are presented in terms of the regularization parameter and the step size. A detailed analysis of approximation by reproducing kernel Hilbert spaces, under mild conditions on the probability measure for sampling, allows us to deal with a general class of convex loss functions.
8.
《Applied and Computational Harmonic Analysis》2020,48(3):868-890
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.
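The spectral algorithms in this setting can all be viewed as filter functions applied to the spectrum of the empirical covariance operator. A hedged finite-dimensional sketch contrasting two members of the family, ridge regression (filter 1/(s + lam)) and the gradient method (Landweber iteration, where the iteration count plays the role of the regularization parameter); the dimensions, step size, and data are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 5
X = rng.normal(size=(m, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.05 * rng.normal(size=m)   # noisy linear model

C = X.T @ X / m          # empirical covariance operator
b = X.T @ y / m

# Ridge regression: spectral filter g_lam(s) = 1 / (s + lam).
lam = 0.01
w_ridge = np.linalg.solve(C + lam * np.eye(d), b)

# Gradient method (Landweber iteration): w <- w - eta * (C w - b).
# Stopping after t steps regularizes roughly like lam ~ 1 / (eta * t).
eta = 1.0 / np.linalg.eigvalsh(C).max()      # step size below 1 / ||C||
w_gd = np.zeros(d)
for _ in range(300):
    w_gd -= eta * (C @ w_gd - b)
```

Both filters damp the unstable small-eigenvalue directions; the source condition and capacity assumption in the paper govern which filter and parameter choice attain the optimal rate.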
9.
Dini derivatives in Riemannian manifold settings are studied in this paper. In addition, a characterization for Lipschitz and convex functions defined on Riemannian manifolds and sufficient optimality conditions for constraint optimization problems in terms of the Dini derivative are given.
10.
Xuemei Dong 《Journal of Mathematical Analysis and Applications》2008,341(2):1018-1027
We propose a stochastic gradient descent algorithm for learning the gradient of a regression function from random samples of function values. This is a learning algorithm involving Mercer kernels. By a detailed analysis in reproducing kernel Hilbert spaces, we provide some error bounds to show that the gradient estimated by the algorithm converges to the true gradient, under some natural conditions on the regression function and suitable choices of the step size and regularization parameters.
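The object being learned here is the gradient of the regression function itself. A hedged one-dimensional sketch of the underlying variational problem, a first-order Taylor-difference fit in an RKHS, solved below directly by linear algebra rather than by the paper's stochastic gradient iteration; the kernel widths, locality weights, and regularization parameter are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 40
x = np.sort(rng.uniform(-1, 1, m))
y = x**2 + 0.01 * rng.normal(size=m)      # regression function x^2, true gradient 2x

sigma_k, sigma_w, lam = 0.3, 0.2, 1e-4
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma_k**2))   # Mercer kernel
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma_w**2))   # locality weights

# Fit g ~ gradient of the regression function by Taylor matching:
#   min_g  sum_{i,j} W_ij (y_i - y_j + (x_j - x_i) g(x_i))^2 + lam' ||g||_K^2,
# with g = sum_k c_k K(., x_k).  Assemble and solve the normal equations in c.
A = lam * m**2 * K                        # RKHS penalty term lam' * c^T K c
b = np.zeros(m)
for i in range(m):
    u = x - x[i]                          # u_j = x_j - x_i
    A += np.sum(W[i] * u**2) * np.outer(K[i], K[i])
    b -= np.sum(W[i] * u * (y[i] - y)) * K[i]
c = np.linalg.solve(A + 1e-8 * np.eye(m), b)
g_hat = K @ c                             # estimated gradient at the sample points
```

Away from the boundary, g_hat should track the true gradient 2x; in the multivariate setting the same fit also reveals which coordinates matter, which is the variable-selection use mentioned above.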
11.
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is C^(2s). For such a kernel on a general domain we show that the RKHS can be embedded into the function space C^s. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.
12.
The purpose of this paper is to study certain variational principles and Sobolev-type estimates for the approximation order resulting from using strictly positive definite kernels to do generalized Hermite interpolation on a closed (i.e., no boundary), compact, connected, orientable, m-dimensional C^∞ Riemannian manifold with C^∞ metric g_ij. The rate of approximation can be more fully analyzed, with rates of approximation given in terms of Sobolev norms. Estimates on the rate of convergence for generalized Hermite and other distributional interpolants can be obtained in certain circumstances and, finally, the constants appearing in the approximation order inequalities are explicit. Our focus in this paper will be on approximation rates in the cases of the circle, other tori, and the 2-sphere.
April 10, 1996. Dates revised: March 26, 1997; August 26, 1997. Date accepted: September 12, 1997. Communicated by Ronald A. DeVore.
13.
We study an inverse problem for a non-compact Riemannian manifold whose ends have the following properties: on each end, the Riemannian metric is assumed to be a short-range perturbation of a metric of the form dy^2 + h(x,dx), with h(x,dx) the metric of some compact manifold of codimension 1. Moreover, one end is exactly cylindrical, i.e. the metric is equal to dy^2 + h(x,dx). Given two such manifolds having the same scattering matrix on that exactly cylindrical end for all energies, we show that these two manifolds are isometric.
14.
This is the fourth article of our series. Here, we study weighted norm inequalities for the Riesz transform of the Laplace–Beltrami operator on Riemannian manifolds and of subelliptic sums of squares on Lie groups, under the doubling volume property and Gaussian upper bounds.
15.
Zhixiang Chen 《Analysis in Theory and Applications》2007,23(4):325-333
The spherical approximation between two nested reproducing kernel Hilbert spaces generated from different smooth kernels is investigated. It is shown that the functions of a space can be approximated by those of the subspace with better smoothness. Furthermore, an upper bound on the approximation error is given.
16.
Yun-Long Feng 《Applicable analysis》2013,92(5):979-991
Least-squares regularized learning algorithms for regression are well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes, or for regularizers given by the qth power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.
17.
The main purpose of the present paper is to employ spherical basis functions (SBFs) to study uniform distribution of points on spheres. We extend Weyl's criterion for uniform distribution of points on spheres to include a characterization in terms of an SBF. We show that every set of minimal energy points associated with an SBF is uniformly distributed on the spheres. We give an error estimate for numerical integration based on the minimal energy points. We also estimate the separation of the minimal energy points.
18.
T. Matsuura 《Applicable analysis》2013,92(8):901-915
We discuss the relations among sampling theory (the Sinc method), reproducing kernels and Tikhonov regularization. Here we see an important difference between the Sobolev Hilbert spaces and the Paley–Wiener spaces when their reproducing kernel Hilbert spaces are used as approximation spaces in Tikhonov regularization. Further, using the Paley–Wiener spaces, we illustrate numerical experiments for new inversion formulas for the Gaussian convolution, a much more powerful and improved method realized by computers. In this article we give practical numerical and analytical inversion formulas for the Gaussian convolution that can be realized on computers.
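The Gaussian-convolution inversion can be illustrated in a discrete Fourier setting: convolution multiplies each frequency by exp(-t k^2), so naive inversion amplifies high-frequency noise catastrophically, while a Tikhonov filter stabilizes it. A hedged numerical sketch; the grid size, diffusion time t, noise level, and alpha are arbitrary, and this is plain Fourier-domain Tikhonov regularization, not the paper's Paley–Wiener construction:

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sign(np.sin(x))                        # discontinuous source to recover
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
t = 0.01
G = np.exp(-t * k**2)                         # Gaussian convolution in Fourier space

u = np.fft.ifft(G * np.fft.fft(f)).real       # blurred data
u += 1e-3 * np.random.default_rng(2).normal(size=n)   # measurement noise

# Naive inversion divides by G and blows up at high frequencies ...
f_naive = np.fft.ifft(np.fft.fft(u) / G).real
# ... Tikhonov regularization replaces 1/G by the stable filter G / (G^2 + alpha),
# the per-frequency minimizer of |G * fhat - uhat|^2 + alpha * |fhat|^2.
alpha = 1e-4
f_tik = np.fft.ifft(G * np.fft.fft(u) / (G**2 + alpha)).real
```

With these settings the regularized reconstruction recovers the low frequencies of f while the naive inverse is swamped by amplified noise, which is exactly the instability the Tikhonov scheme is designed to suppress.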
19.
20.
Explicit examples of finite subgroups of the group of homotopy classes of self-homotopy equivalences of some flat Riemannian manifolds which cannot be lifted to effective actions are given. It is also shown that no finite subgroups of the kernel of π0(Homeo(M))→Out π1(M) can be lifted back to Homeo(M), for a large class of flat manifolds M. Some results of an earlier paper by the authors are refined and related to recent work of R. Schoen and S.T. Yau.