Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
It is well known that representations of kernel-based approximants in terms of the standard basis of translated kernels are notoriously unstable. To come up with a more useful basis, we adopt the strategy known from Newton’s interpolation formula, using generalized divided differences and a recursively computable set of basis functions vanishing at increasingly many data points. The resulting basis turns out to be orthogonal in the Hilbert space in which the kernel is reproducing, and under certain assumptions it is complete and allows convergent expansions of functions into series of interpolants. Some numerical examples show that the Newton basis is much more stable than the standard basis of kernel translates.
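As a rough sketch of how such a basis can be computed (assuming the data sites are processed in their given order, without the pivoting a production code would use), the recursion amounts to a column-by-column Cholesky factorization of the kernel matrix:

```python
import numpy as np

def newton_basis(K):
    """Values of the Newton basis at the data points themselves.

    K : (n, n) kernel matrix with K[i, j] = k(x_i, x_j), assumed positive definite.
    Returns V with V[i, j] = v_j(x_i); V is lower triangular, so v_j vanishes
    at x_1, ..., x_{j-1}, as in Newton's interpolation scheme.
    """
    n = K.shape[0]
    V = np.zeros_like(K, dtype=float)
    for j in range(n):
        # subtract projections onto the previously built basis functions
        r = K[:, j] - V[:, :j] @ V[j, :j]
        V[:, j] = r / np.sqrt(r[j])
    return V
```

Since V V^T reproduces K, the Newton basis values at the nodes coincide with the Cholesky factor of the kernel matrix, which explains the improved stability over the raw basis of kernel translates.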

2.
Multiscale kernels are a new type of positive definite reproducing kernels in Hilbert spaces. They are constructed by a superposition of shifts and scales of a single refinable function and were introduced in the paper of R. Opfer [Multiscale kernels, Adv. Comput. Math. (2004), in press]. By applying standard reconstruction techniques from radial basis function theory or machine learning, multiscale kernels can be used to reconstruct multivariate functions from scattered data. The multiscale structure of the kernel allows the approximant to be represented on several levels of detail or accuracy. In this paper we prove that multiscale kernels are often reproducing kernels in Sobolev spaces. We use this fact to derive error bounds. The set of functions used for the construction of the multiscale kernel turns out to be a frame in a Sobolev space of a certain smoothness. We establish that the frame coefficients of approximants can be computed explicitly; in our case there is no need to compute either the inverse of the frame operator or inner products in the Sobolev space. Moreover, we prove a recursion formula relating the frame coefficients of different levels. We present a bivariate numerical example illustrating the multiresolution and data compression effects.
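To make the construction concrete, here is a hedged one-dimensional sketch of a kernel of this type, using the hat function (a simple refinable, compactly supported choice) and a hypothetical geometric level weight `lam`; the paper's actual refinable functions and weights may differ:

```python
import numpy as np

def hat(t):
    # compactly supported, refinable hat function (linear B-spline)
    return np.maximum(0.0, 1.0 - np.abs(t))

def multiscale_kernel(x, y, levels=4, lam=0.25):
    """K(x, y) = sum_j lam^j sum_k phi(2^j x - k) phi(2^j y - k), 1-D sketch."""
    total = 0.0
    for j in range(levels):
        s = 2 ** j
        # only finitely many shifts overlap the supports of phi(s*x - k) and phi(s*y - k)
        lo = int(np.floor(min(s * x, s * y))) - 1
        hi = int(np.ceil(max(s * x, s * y))) + 1
        k = np.arange(lo, hi + 1)
        total += lam ** j * np.sum(hat(s * x - k) * hat(s * y - k))
    return total
```

Each level contributes a positive semi-definite Gram term, so the superposition is again positive semi-definite, and the per-level sums are exactly the coefficients that can be read off "on several levels of detail".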

3.
In this paper we derive several new results involving matrix-valued radial basis functions (RBFs). We begin by introducing a class of matrix-valued RBFs which can be used to construct interpolants that are curl-free. Next, we offer a characterization of the native space for divergence-free and curl-free kernels based on the Fourier transform. Finally, we investigate the stability of the interpolation matrix for both the divergence-free and curl-free cases, and when the kernel has finite smoothness we obtain sharp estimates. An erratum to this article is available.
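For orientation, the constructions usually take the following form in the matrix-valued RBF literature (a standard formulation given here for context; this paper's precise normalizations may differ): starting from a sufficiently smooth scalar kernel ψ, one sets

```latex
\Phi_{\mathrm{div}}(x) \;=\; \bigl(-\Delta\, I + \nabla\nabla^{\mathsf T}\bigr)\,\psi(x),
\qquad
\Phi_{\mathrm{curl}}(x) \;=\; -\,\nabla\nabla^{\mathsf T}\,\psi(x),
```

so that interpolants of the form s(x) = Σ_j Φ(x − x_j) c_j are pointwise divergence-free or curl-free, respectively, because every column of −∇∇^T ψ is itself a gradient.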

4.
Motivated by applications to machine learning, we construct a reversible and irreducible Markov chain whose state space is a certain collection of measurable sets of a chosen locally compact Hausdorff space X. We study the resulting network (a connected undirected graph), including transience, Royden and Riesz decompositions, and kernel factorization. We describe a construction of Hilbert spaces of signed measures that come equipped with a new notion of reproducing kernel, and in which a regularized optimization problem, involving the approximation of L2 functions by functions of finite energy, has a unique solution. The latter has applications to machine learning (for Markov random fields, for example).

5.
It is often observed that interpolation based on translates of radial basis functions or non-radial kernels is numerically unstable due to the exceedingly large condition numbers of the kernel matrices. But if stability is assessed in function space, without reference to particular bases, this paper proves that kernel-based interpolation is stable. Provided that the data are not too wildly scattered, the L2 or L∞ norms of interpolants can be bounded above by the discrete ℓ2 or ℓ∞ norms of the data. Furthermore, the Lagrange basis functions are uniformly bounded, and the Lebesgue constants grow at most like the square root of the number of data points. However, this analysis applies only to kernels of limited smoothness. Numerical examples support our bounds, but also show that the case of infinitely smooth kernels must lead to worse bounds in future work, while the observed Lebesgue constants for kernels with limited smoothness even seem to be independent of the sample size and the fill distance.
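The quantities in question are easy to examine numerically; the following sketch (with a Matérn-type kernel of limited smoothness chosen purely for illustration) computes the Lagrange basis and estimates the Lebesgue constant on a grid:

```python
import numpy as np

def lebesgue_constant(kernel, X, Xeval):
    """Estimate the Lebesgue constant max_x sum_j |u_j(x)| for kernel interpolation.

    kernel : callable k(A, B) -> cross-kernel matrix between point sets A (m,d), B (n,d)
    X      : (n, d) data sites, Xeval : (m, d) evaluation grid
    """
    A = kernel(X, X)               # kernel matrix at the data sites
    B = kernel(Xeval, X)           # cross-kernel matrix
    U = np.linalg.solve(A, B.T).T  # U[i, j] = u_j(x_eval_i): Lagrange basis values
    return np.max(np.sum(np.abs(U), axis=1))

# example with a Matern-type (finitely smooth) kernel in one dimension
def matern(A, B):
    r = np.abs(A[:, None, 0] - B[None, :, 0])
    return (1 + r) * np.exp(-r)

X = np.random.rand(40, 1)
Xg = np.linspace(0, 1, 400)[:, None]
print(lebesgue_constant(matern, X, Xg))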

6.
Periodic spline interpolation in Euclidean space R^d is studied using translates of multivariate Bernoulli splines introduced in [25]. The interpolating polynomial spline functions are characterized by a minimal norm property among all interpolants in a Hilbert space of Sobolev type. The results follow from a relation between multivariate Bernoulli splines and the reproducing kernel of this Hilbert space. They apply to scattered data interpolation as well as to interpolation on a uniform grid. For bivariate three-directional Bernoulli splines, the approximation order of the interpolants on a refined uniform mesh is computed.

7.
In this article we study reproducing kernel Hilbert spaces (RKHS) associated with translation-invariant Mercer kernels. Applying a special derivative reproducing property, we show that when the kernel is real analytic, every function from the RKHS is real analytic. This is used to investigate subspaces of the RKHS generated by a set of fundamental functions. The analyticity of functions from the RKHS enables us to derive estimates for covering numbers, which form an essential part of the analysis of some algorithms in learning theory. The work is supported by City University of Hong Kong (Project No. 7001816) and the National Science Fund for Distinguished Young Scholars of China (Project No. 10529101).

8.
A certain kernel (sometimes called the Pick kernel) associated to Schur functions on the disk is always positive semi-definite. A generalization of this fact is well-known for Schur functions on the polydisk. In this article, we show that the “Pick kernel” on the polydisk has a great deal of structure beyond being positive semi-definite. It can always be split into two kernels possessing certain shift invariance properties.
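For context, the kernel in question is usually written as follows (a standard form, hedged, since the article's own notation is not reproduced here): for a Schur function f on the polydisk,

```latex
K_f(z,w) \;=\; \frac{1 - f(z)\,\overline{f(w)}}{\prod_{i=1}^{d}\bigl(1 - z_i\,\overline{w_i}\bigr)},
\qquad z, w \in \mathbb{D}^{d}.
```

On the bidisk (d = 2), a splitting of the numerator as 1 − f(z)·conj(f(w)) = (1 − z_1·conj(w_1)) K_1(z,w) + (1 − z_2·conj(w_2)) K_2(z,w), with K_1 and K_2 positive semi-definite, is the Agler decomposition, which appears to be the kind of two-kernel structure alluded to above.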

9.
We ask when convolution operators with scalar- or operator-valued kernel functions map between weighted L2 spaces of Hilbert space-valued functions. For a certain class of decreasing weights, including negative powers such as (t + a)^{-m}, we solve the one-weight problem completely by using Laplace transforms and Bergman-type spaces of vector-valued analytic functions. For a much more general class of decreasing weights, we solve the one-weight problem for all positive real kernels (also for Lp(w) with p > 1) by results on Steklov operators which generalise the weighted Hardy inequality. When the kernel function is a strongly continuous semigroup of bounded linear Hilbert space operators, arising from the input–output maps of certain linear systems, the most obvious sufficient condition for boundedness, obtained by taking norm signs inside the integrals, is also necessary in many cases, but not in general. Submitted: July 15, 2007. Revised: November 19, 2007. Accepted: December 14, 2007.

10.
In machine learning algorithms, one of the crucial issues is the representation of the data. As data sources become heterogeneous and large-scale, multiple kernel methods help to classify nonlinear data. Nevertheless, finite combinations of kernels are limited to a finite choice. To overcome this limitation, a novel method of “infinite” kernel combinations is proposed with the help of infinite and semi-infinite programming over all elements of the kernel space. Considering all infinitesimally fine convex combinations of kernels from an infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann–Stieltjes) integral constraint arising from the combinations. After a parametrization in the space of probability measures, the problem becomes semi-infinite. We adapt well-known numerical methods to our infinite kernel learning model and analyze the existence of solutions and the convergence of the given algorithms. We implement our new algorithm, called infinite kernel learning (IKL), on heterogeneous data sets using the exchange method and the conceptual reduction method, which are well-known numerical techniques for solving semi-infinite programs. The results show that our IKL approach improves classification accuracy on heterogeneous data compared to classical one-kernel approaches.
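As a point of reference for the finite case this work generalizes, here is a minimal sketch (hypothetical parameter names; Gaussian kernels chosen only for illustration) of a finite convex combination of kernels, whose finite weight vector IKL replaces by a probability measure over a continuous kernel parameter:

```python
import numpy as np

def combined_gram(X, widths, weights):
    """Finite convex combination of Gaussian kernels, K = sum_i beta_i K_{sigma_i}.

    X       : (n, d) data matrix
    widths  : kernel widths sigma_i
    weights : convex weights beta_i (nonnegative, summing to 1)
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.zeros((len(X), len(X)))
    for sigma, beta in zip(widths, weights):
        K += beta * np.exp(-sq / (2 * sigma ** 2))
    return K

X = np.random.rand(10, 3)
K = combined_gram(X, widths=[0.1, 1.0, 10.0], weights=[0.2, 0.5, 0.3])
```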

11.
The notion of sk-spline is generalised to arbitrary compact Abelian groups. A class of conditionally positive definite kernels on the group is identified, and a subclass corresponding to the generalised sk-spline is used for constructing interpolants, on scattered data, to continuous functions on the group. The special case of the d-dimensional torus is considered, and convergence rates are proved when the kernel is a product of one-dimensional kernels and the data are gridded.

12.
Motivated by the need to process non-point-evaluation functional data, we introduce the notion of functional reproducing kernel Hilbert spaces (FRKHSs). Such a space admits a unique functional reproducing kernel which reproduces a family of continuous linear functionals on the space. The theory of FRKHSs and the associated functional reproducing kernels is established. A special class of FRKHSs, which we call perfect FRKHSs, is studied; these reproduce both the family of standard point-evaluation functionals and, at the same time, a different family of continuous linear (non-point-evaluation) functionals. The perfect FRKHSs are characterized in terms of features, especially those with respect to integral functionals. In particular, several specific examples of perfect FRKHSs are presented. We apply the theory of FRKHSs to sampling and regularized learning, where non-point-evaluation functional data are used. Specifically, a general complete reconstruction formula from linear functional values is established in the framework of FRKHSs. Average sampling and the reconstruction of vector-valued functions are considered in specific FRKHSs. We also investigate, in the FRKHS setting, regularized learning schemes which learn a target element from non-point-evaluation functional data. The desired representer theorems for the learning problems are established, demonstrating the key roles played by FRKHSs and functional reproducing kernels in machine learning from non-point-evaluation functional data. We finally illustrate that the continuity of the linear functionals used to obtain the non-point-evaluation functional data on an FRKHS is necessary for the stability of the numerical reconstruction algorithm using the data.

13.
14.
In this paper, we describe a recursive method for computing interpolants defined in a space spanned by a finite number of continuous functions on R^d. We apply this method to construct several interpolants such as spline interpolants, tensor product interpolants and multivariate polynomial interpolants. We also give a simple algorithm for solving a multivariate polynomial interpolation problem and constructing the minimal interpolation space for a given finite set of interpolation points.
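A minimal sketch of the non-recursive baseline (not the paper's recursive method): interpolation in a span of continuous functions reduces to solving a collocation system, assuming the collocation matrix is nonsingular for the given points — precisely the issue the minimal interpolation space construction addresses:

```python
import numpy as np

def fit_in_span(basis, X, f_vals):
    """Interpolate values f_vals at points X in span{basis functions}.

    basis : list of callables b_k(x) on R^d; the collocation matrix is
    assumed nonsingular, which is not guaranteed for arbitrary points.
    """
    V = np.array([[b(x) for b in basis] for x in X])  # collocation matrix
    c = np.linalg.solve(V, f_vals)
    return lambda x: sum(ck * b(x) for ck, b in zip(c, basis))

# bivariate example: span{1, x, y, xy} on the corners of the unit square
basis = [lambda p: 1.0, lambda p: p[0], lambda p: p[1], lambda p: p[0] * p[1]]
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
p = fit_in_span(basis, X, np.array([1.0, 2.0, 3.0, 5.0]))
print(p(np.array([0.5, 0.5])))  # 2.75
```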

15.
This paper reconstructs multivariate functions from scattered data by a new multiscale technique. The reconstruction uses standard methods of interpolation by positive definite reproducing kernels in Hilbert spaces, but it adopts techniques from wavelet theory and shift-invariant spaces to construct a new class of kernels as multiscale superpositions of shifts and scales of a single compactly supported function φ. This means that the advantages of scaled regular grids are used to construct the kernels, while the advantages of unrestricted scattered data interpolation are maintained after the kernels are constructed. Using such a multiscale kernel, the reconstruction method interpolates the given scattered data; no manipulation of the data (e.g., thinning or separation into subsets of certain scales) is needed. The multiscale structure of the kernel then allows the interpolant to be represented on regular grids on all scales involved, with cheap evaluation due to the compact support of the function φ, and with a recursive evaluation technique if φ is chosen to be refinable. There is also a wavelet-like data reduction effect if a suitable thresholding strategy is applied to the coefficients of the interpolant when it is represented over a scaled grid. Various numerical examples are presented, illustrating the multiresolution and data compression effects.
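The data reduction effect mentioned above can be sketched as simple coefficient thresholding (a hedged illustration; the paper's actual thresholding strategy may differ):

```python
import numpy as np

def threshold_coefficients(coeffs, eps):
    """Wavelet-like compression: zero out small interpolant coefficients.

    coeffs : list of per-level coefficient arrays over scaled grids
    eps    : drop tolerance
    Returns the thresholded copies and the fraction of coefficients kept.
    """
    kept, total, out = 0, 0, []
    for c in coeffs:
        mask = np.abs(c) >= eps
        out.append(np.where(mask, c, 0.0))
        kept += int(mask.sum())
        total += c.size
    return out, kept / total
```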

16.
Reproducing Kernel Hilbert Spaces (RKHSs) are a very useful and powerful tool of functional analysis, with applications in many diverse paradigms such as multivariate statistics and machine learning. Fractal interpolation, on the other hand, is a relatively recent technique that generalizes traditional interpolation through the introduction of self-similarity. In this work we show that the functional space of any family of (recurrent) fractal interpolation functions ((R)FIFs) constitutes an RKHS with a specific associated kernel function, thus considerably extending the toolbox of known kernel functions and introducing fractals to the RKHS world. We also provide the means for computing the kernel function that corresponds to any specific fractal RKHS and give several examples.

17.
Motivated by the importance of kernel-based methods for multi-task learning, we provide a complete characterization of multi-task finite rank kernels in terms of the positivity of what we call the associated characteristic operator. Consequently, we establish that every continuous multi-task kernel defined on a cube in a Euclidean space can not only be uniformly approximated by multi-task polynomial kernels, but can also be extended as a multi-task kernel to all of the Euclidean space. Finally, we discuss the interpolation of multi-task kernels by multi-task finite rank kernels.

18.
In this paper we discuss necessary conditions and sufficient conditions for the compression of an analytic Toeplitz operator onto a shift coinvariant subspace to have nontrivial reducing subspaces. We give necessary and sufficient conditions for the kernel of a Toeplitz operator whose symbol is the quotient of two inner functions to be nontrivial and obtain examples of reducing subspaces from these kernels. Motivated by this result we give necessary conditions and sufficient conditions for the kernel of a Toeplitz operator whose symbol is the quotient of two inner functions to be nontrivial in terms of the supports of the two inner functions. By studying the commutant of a compression, we are able to give a necessary condition for the existence of reducing subspaces on certain shift coinvariant subspaces.

19.
Integral equations of the first kind with periodic kernels, arising in solving partial differential equations by interior source methods, are considered. The existence and uniqueness of solutions in appropriate spaces of linear analytic functionals are proved. The rate of convergence of a collocation method with Dirac delta functions as trial functions is obtained for uniform meshes. For an analytic kernel the convergence rate is exponential.

20.
Extreme learning machine (ELM) is not only an effective classifier in supervised learning but can also be applied to unsupervised and semi-supervised learning. The model structures of unsupervised extreme learning machine (US-ELM) and semi-supervised extreme learning machine (SS-ELM) are the same as that of ELM; the difference between them lies in the cost function. We introduce kernel functions into US-ELM and propose unsupervised extreme learning machine with kernel (US-KELM); semi-supervised extreme learning machine with kernel (SS-KELM) is proposed analogously. Wavelet analysis has the characteristics of multiresolution analysis and sparse representation, and wavelet kernel functions have been widely used in support vector machines. Therefore, combining the wavelet kernel function with US-ELM and SS-ELM, unsupervised extreme learning machine with wavelet kernel function (US-WKELM) and semi-supervised extreme learning machine with wavelet kernel function (SS-WKELM) are proposed in this paper. The experimental results show the feasibility and validity of US-WKELM and SS-WKELM in clustering and classification.
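A common concrete choice of wavelet kernel in this literature is the Morlet-type product kernel, shown here as an assumption (the abstract does not fix a specific mother wavelet, and the width a is a hypothetical parameter):

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-type product wavelet kernel:
    k(x, y) = prod_i cos(1.75 (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2)).
    X : (m, d), Y : (n, d); returns the (m, n) Gram block.
    """
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D ** 2 / (2 * a ** 2)), axis=-1)

K = wavelet_kernel(np.random.rand(5, 2), np.random.rand(7, 2), a=0.5)
```

Such a Gram matrix can be substituted for the hidden-layer output product in a kernelized ELM, which is the role it plays in US-WKELM and SS-WKELM.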
