3.
A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of output sample values. This excludes the common case of Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces, without assuming uniformly bounded sampling. By imposing some incremental conditions on the moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
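To make the scheme concrete, here is a minimal sketch of least squares regularization in an RKHS (kernel ridge regression). By the representer theorem, the minimizer of (1/m) Σᵢ (f(xᵢ) − yᵢ)² + λ‖f‖²_K has the form f = Σᵢ cᵢ K(xᵢ, ·) with coefficients solving (K + λmI)c = y. The Gaussian kernel, bandwidth, and regularization parameter below are illustrative choices, not taken from the paper; the Gaussian output noise echoes the unbounded-sampling setting the abstract addresses.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.1):
    """Gaussian kernel matrix K[i, j] = exp(-(x_i - y_j)^2 / (2 sigma^2))."""
    return np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * sigma**2))

def krr_fit(x, y, lam=1e-4, sigma=0.1):
    """Solve (K + lam * m * I) c = y for the regularized least squares estimator."""
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

# Synthetic regression with Gaussian noise: the unbounded-output case
# the abstract is concerned with.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=100)
c = krr_fit(x, y)
x_test = np.linspace(0, 1, 5)
print(gaussian_kernel(x_test, x).dot(c))  # f(t) = sum_i c_i K(x_i, t), near sin(2*pi*t)
```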
4.
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^{−ζ} with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
5.
Let f, g be entire functions. If there exist M1, M2 > 0 such that |f(z)| ≤ M1|g(z)| whenever |z| > M2, we say that f ≼ g. Let X be a reproducing Hilbert space with an orthogonal basis. We say that X is an ordered reproducing Hilbert space (or X is ordered) if f ≼ g and g ∈ X imply f ∈ X. In this note, we give a condition on the basis under which X is ordered and a condition under which it is not; in a borderline case, there are examples showing that X can be ordered or not.
6.
This paper proposes a method to estimate the conditional quantile function using an ε-insensitive loss in a reproducing kernel Hilbert space. When choosing a smoothing parameter in nonparametric frameworks, it is necessary to evaluate the complexity of the model. In this regard, we provide a simple formula for computing an effective number of parameters when implementing an ε-insensitive loss. We also investigate the effects of the ε-insensitive loss.
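The paper's precise loss is not reproduced here, but one common form of an ε-insensitive quantile (pinball) loss, zero on a band of half-width ε around the residual and tilted by the quantile level τ, is sketched below; the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def eps_insensitive_pinball(residual, tau=0.5, eps=0.1):
    """One common epsilon-insensitive quantile loss: zero on |r| <= eps,
    slope tau to the right of the band, slope (1 - tau) to the left."""
    r = np.asarray(residual, dtype=float)
    return np.maximum.reduce([tau * (r - eps),
                              (tau - 1.0) * (r + eps),
                              np.zeros_like(r)])

r = np.array([-0.5, -0.05, 0.0, 0.05, 0.5])
print(eps_insensitive_pinball(r, tau=0.9, eps=0.1))
# Residuals inside [-0.1, 0.1] incur no loss; others are penalized asymmetrically.
```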
7.
We describe how to use Schoenberg’s theorem for a radial kernel combined with existing bounds on the approximation error functions for Gaussian kernels to obtain a bound on the approximation error function for the radial kernel. The result is applied to the exponential kernel and Student’s kernel. To establish these results we develop a general theory regarding mixtures of kernels. We analyze the reproducing kernel Hilbert space (RKHS) of the mixture in terms of the RKHSs of the mixture components and prove a type of Jensen inequality between the approximation error function for the mixture and the approximation error functions of the mixture components.
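A concrete instance of such a mixture: the exponential kernel is a scale mixture of Gaussians via the classical identity e^{−|t|} = (1/√π) ∫₀^∞ u^{−1/2} e^{−u} e^{−t²/(4u)} du. The snippet below checks this identity numerically; it is a standard fact illustrating the mixture viewpoint, not a computation taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

def exp_kernel_as_gaussian_mixture(t):
    """Evaluate (1/sqrt(pi)) * int_0^inf u^{-1/2} e^{-u} e^{-t^2/(4u)} du,
    which equals exp(-|t|): the exponential kernel as a scale mixture
    of Gaussian kernels."""
    integrand = lambda u: u**-0.5 * np.exp(-u - t**2 / (4 * u))
    val, _ = quad(integrand, 0, np.inf)
    return val / np.sqrt(np.pi)

for t in [0.0, 0.5, 1.0, 2.0]:
    print(t, exp_kernel_as_gaussian_mixture(t), np.exp(-abs(t)))  # columns agree
```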
8.
In this paper, we consider unregularized online learning algorithms in a Reproducing Kernel Hilbert Space (RKHS). Firstly, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general α-activating loss (see Definition 1 below). Our results extend and refine the results in [30] for the least square loss and the recent result [3] for the loss function with a Lipschitz-continuous gradient. Moreover, we establish a very general condition on the step sizes which guarantees the convergence of the last iterate of such algorithms. Secondly, we establish, for the first time, the convergence of the unregularized pairwise learning algorithm with a general loss function and derive explicit rates under the assumption of polynomially decaying step sizes. Concrete examples are used to illustrate our main results. The main techniques are tools from convex analysis, refined inequalities of Gaussian averages [5], and an induction approach.
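To illustrate the class of algorithms being analyzed, here is a sketch of unregularized online gradient descent in an RKHS for the least square loss with polynomially decaying step sizes ηₜ = η₁ t^(−θ); the kernel, constants, and data are illustrative assumptions. The iterate f_{t+1} = f_t − ηₜ (f_t(xₜ) − yₜ) K(xₜ, ·) is stored through its kernel expansion coefficients.

```python
import numpy as np

def rbf(u, v, sigma=0.2):
    """Gaussian kernel on the real line (illustrative choice)."""
    return np.exp(-(u - v) ** 2 / (2 * sigma**2))

def online_kernel_ls(stream, eta1=0.5, theta=0.5):
    """Unregularized online gradient descent in an RKHS for the least
    square loss: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
    with polynomially decaying step sizes eta_t = eta1 * t**(-theta)."""
    xs, coefs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        f_x = sum(c * rbf(xi, x) for xi, c in zip(xs, coefs))  # f_t(x_t)
        eta = eta1 * t ** (-theta)
        xs.append(x)
        coefs.append(-eta * (f_x - y))  # new expansion coefficient
    return xs, coefs

rng = np.random.default_rng(1)
data = [(x, np.sin(2 * np.pi * x) + rng.normal(scale=0.1))
        for x in rng.uniform(0, 1, 200)]
xs, coefs = online_kernel_ls(data)
f = lambda s: sum(c * rbf(xi, s) for xi, c in zip(xs, coefs))
print(f(0.25), np.sin(2 * np.pi * 0.25))  # last iterate near the target
```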
9.
In this article we study reproducing kernel Hilbert spaces (RKHS) associated with translation-invariant Mercer kernels. Applying a special derivative reproducing property, we show that when the kernel is real analytic, every function from the RKHS is real analytic. This is used to investigate subspaces of the RKHS generated by a set of fundamental functions. The analyticity of functions from the RKHS enables us to derive some estimates for the covering numbers, which form an essential part of the analysis of some algorithms in learning theory.
The work is supported by City University of Hong Kong (Project No. 7001816) and the National Science Fund for Distinguished Young Scholars of China (Project No. 10529101).
11.
We give, in terms of the reproducing kernel and the Berezin symbol, sufficient conditions ensuring the invertibility of certain bounded linear operators on some functional Hilbert spaces.
12.
In this paper, we discuss sampling and reconstruction of signals in the weighted reproducing kernel space associated with an idempotent integral operator. We show that any signal in that space can be stably reconstructed from its weighted samples taken on a relatively separated set with sufficiently small gap, and we develop an iterative algorithm for carrying out this reconstruction.
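The iterative reconstruction described above has the flavor of a frame (Landweber-type) iteration. A finite-dimensional sketch, standing in for the paper's operator-theoretic setting, recovers f from samples b = Af via f_{k+1} = f_k + τ Aᵀ(b − Af_k), which converges when the sampling is stable (A has full column rank) and τ is small enough.

```python
import numpy as np

def landweber_reconstruct(A, b, tau, iters=2000):
    """Iteratively reconstruct f from samples b = A @ f:
    f_{k+1} = f_k + tau * A.T @ (b - A @ f_k)."""
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        f = f + tau * A.T @ (b - A @ f)
    return f

rng = np.random.default_rng(2)
f_true = rng.normal(size=20)
A = rng.normal(size=(60, 20)) / np.sqrt(60)   # oversampled, stable sampling
b = A @ f_true                                # the (weighted) samples
tau = 1.0 / np.linalg.norm(A, 2) ** 2         # step size below 2 / ||A||^2
f_rec = landweber_reconstruct(A, b, tau)
print(np.linalg.norm(f_rec - f_true))         # close to zero
```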
13.
This paper presents learning rates for least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish a direct approximation theorem by Bernstein-Durrmeyer operators in the L² space associated with a Borel probability measure.
14.
A theorem of M. F. Driscoll says that, under certain restrictions, the probability that a given Gaussian process has its sample paths almost surely in a given reproducing kernel Hilbert space (RKHS) is either zero or one. Driscoll also found a necessary and sufficient condition for that probability to be one. Doing away with Driscoll's restrictions, R. Fortet generalized his condition and named it nuclear dominance. He stated a theorem claiming nuclear dominance to be necessary and sufficient for the existence of a process (not necessarily Gaussian) having its sample paths in a given RKHS. This theorem, specifically the necessity of the condition, turns out to be incorrect, as we show via counterexamples. On the other hand, a weaker sufficient condition is available. Using Fortet's tools along with some new ones, we correct Fortet's theorem and then find the generalization of Driscoll's result. The key idea is that of a random element in an RKHS whose values are sample paths of a stochastic process. As in Fortet's work, we make almost no assumptions about the reproducing kernels we use, and we demonstrate the extent to which one may dispense with the Gaussian assumption.
16.
We investigate reproducing kernel Hilbert spaces (RKHS) in which two functions are orthogonal whenever they have disjoint support. Necessary and sufficient conditions in terms of feature maps for the reproducing kernel are established. We also present concrete examples of finite-dimensional RKHS and of RKHS with a translation-invariant reproducing kernel. In particular, it is shown that a Sobolev space has the orthogonality-from-disjoint-support property if and only if it is of integer index.
18.
We study aspects of the analytic foundations of integration and closely related problems for functions of infinitely many variables x₁, x₂, … ∈ D. The setting is based on a reproducing kernel k for functions on D, a family of non-negative weights γ_u, where u varies over all finite subsets of ℕ, and a probability measure ρ on D. We consider the weighted superposition K = Σ_u γ_u k_u of finite tensor products k_u of k. Under mild assumptions we show that K is a reproducing kernel on a properly chosen domain in the sequence space D^ℕ, and that the reproducing kernel Hilbert space H(K) is the orthogonal sum of the spaces H(γ_u k_u). Integration on H(K) can be defined in two ways: via a canonical representer or with respect to the product measure ρ^ℕ on D^ℕ. We relate both approaches and provide sufficient conditions for the two approaches to coincide.
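For a finite truncation to d coordinates with product weights γ_u = ∏_{j∈u} γ_j, the superposition collapses to a product: Σ_{u ⊆ {1,…,d}} γ_u ∏_{j∈u} k(x_j, y_j) = ∏_{j=1}^d (1 + γ_j k(x_j, y_j)). The check below compares the 2^d-term sum with the product form; the base kernel and weights are illustrative choices, not taken from the paper.

```python
import itertools
import numpy as np

def k(s, t):
    """Illustrative base kernel on D = [0, 1] (Brownian-motion kernel)."""
    return min(s, t)

def K_sum(x, y, gammas):
    """Direct 2^d-term superposition K = sum_u gamma_u * prod_{j in u} k(x_j, y_j)."""
    d = len(gammas)
    total = 0.0
    for r in range(d + 1):
        for u in itertools.combinations(range(d), r):
            gamma_u = np.prod([gammas[j] for j in u])          # empty product = 1
            total += gamma_u * np.prod([k(x[j], y[j]) for j in u])
    return total

def K_product(x, y, gammas):
    """Equivalent product form, valid for product weights."""
    return np.prod([1.0 + g * k(s, t) for g, s, t in zip(gammas, x, y)])

rng = np.random.default_rng(3)
d = 6
gammas = [1.0 / (j + 1) ** 2 for j in range(d)]   # decaying weights
x, y = rng.uniform(size=d), rng.uniform(size=d)
print(K_sum(x, y, gammas), K_product(x, y, gammas))  # identical values
```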
19.
The recent development of compressed sensing seeks to extract information from as few samples as possible. In such applications, since the number of samples is restricted, one should deploy the sampling points wisely. We are motivated to study the optimal distribution of finite sampling points in reproducing kernel Hilbert spaces, the natural background function spaces for sampling. Formulation under the framework of optimal reconstruction yields a minimization problem. In the discrete measure case, we estimate the distance between the optimal subspace resulting from a general Karhunen–Loève transform and the kernel space to obtain another algorithm that is computationally favorable. Numerical experiments are then presented to illustrate the effectiveness of the algorithms in the search for optimal sampling points.
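One standard computational proxy for placing sampling points in an RKHS, sketched below as a generic illustration rather than the paper's Karhunen–Loève-based algorithm, is greedy maximization of the power function P_X(x)² = K(x,x) − k_X(x)ᵀ K_X⁻¹ k_X(x), the worst-case pointwise reconstruction error over the unit ball of the space; the Gaussian kernel and bandwidth are illustrative.

```python
import numpy as np

def gauss(X, Y, sigma=0.15):
    return np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * sigma**2))

def greedy_points(candidates, n_points, sigma=0.15):
    """Greedily add the candidate where the power function
    P_X(x)^2 = K(x, x) - k_X(x)^T K_X^{-1} k_X(x) is largest."""
    chosen = [candidates[0]]
    while len(chosen) < n_points:
        Xc = np.array(chosen)
        K_X = gauss(Xc, Xc, sigma)
        k_x = gauss(candidates, Xc, sigma)           # shape (N, |X|)
        sol = np.linalg.solve(K_X, k_x.T)            # K_X^{-1} k_X(x)
        power2 = 1.0 - np.sum(k_x.T * sol, axis=0)   # K(x, x) = 1 for this kernel
        chosen.append(candidates[int(np.argmax(power2))])
    return chosen

grid = np.linspace(0, 1, 400)
print(greedy_points(grid, 6))  # roughly well-spread points on [0, 1]
```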
20.
An approach for solving Fredholm integral equations of the first kind is proposed in a reproducing kernel Hilbert space (RKHS). The interest in this problem is strongly motivated by applications to practical prospecting. In many applications one faces an ill-posed problem in the space C[a,b] or L²[a,b]: small measurement errors in the experimental data can result in unbounded errors in solutions of the equation. In this work, a representation of solutions of Fredholm integral equations of the first kind is obtained whenever solutions exist, and the stability of solutions is discussed in the RKHS. At the same time, it is shown that approximate solutions are also stable with respect to the uniform or L² norm in the RKHS. A numerical experiment shows that the method given in this work is valid.
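To see the instability described above, and one standard remedy (Tikhonov regularization, shown here as a stand-in for the paper's RKHS representation method), the sketch below discretizes a first-kind equation ∫₀¹ k(s,u) f(u) du = g(s) by quadrature and perturbs the data slightly; the kernel and noise level are illustrative assumptions.

```python
import numpy as np

n = 100
t = np.linspace(0, 1, n)
h = t[1] - t[0]
# First-kind Fredholm operator (Af)(s) = int_0^1 k(s, u) f(u) du with a
# smooth Gaussian kernel: severely smoothing, hence severely ill-posed.
A = h * np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02)

f_true = np.sin(np.pi * t)
g = A @ f_true
rng = np.random.default_rng(4)
g_noisy = g + 1e-4 * rng.normal(size=n)          # tiny measurement error

f_naive = np.linalg.solve(A, g_noisy)            # error amplification blows up
lam = 1e-6
f_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g_noisy)  # Tikhonov

print(np.linalg.norm(f_naive - f_true))          # huge
print(np.linalg.norm(f_tik - f_true))            # small
```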