Abstract: This paper investigates the approximation of multivariate functions from data by linear combinations of translates of a positive
definite kernel from a reproducing kernel Hilbert space. When the standard interpolation conditions are relaxed to Chebyshev-type
constraints, one can minimize the Hilbert space norm of the approximant under these constraints. By standard arguments
of optimization theory, the solutions take a simple form, determined by the data associated with the active constraints, which are called
support vectors in the context of machine learning. The corresponding quadratic programming problems are then analyzed.
Using monotonicity results for the Hilbert space norm, iterative techniques based on small quadratic subproblems
over active sets are shown to terminate in finitely many steps, even if they discard part of their previous information and even if they are applied to
infinite data sets, e.g., in online learning. Numerical experiments confirm the theoretical results.
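As a minimal sketch of the relaxed problem described above (the symbols $K$, $\varepsilon$, and the data $(x_j, y_j)$ are generic placeholders, not notation fixed by this abstract): given data $(x_j, y_j)$, $1 \le j \le N$, one may pose

$$\min_{s \in \mathcal{H}} \; \|s\|_{\mathcal{H}} \quad \text{subject to} \quad |s(x_j) - y_j| \le \varepsilon, \quad 1 \le j \le N,$$

whose solution, by the optimization-theoretic arguments mentioned above, takes the form

$$s(x) = \sum_{j \in A} \alpha_j \, K(x, x_j),$$

where $A$ indexes the active constraints, i.e., the support vectors.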
Dedicated to C.A. Micchelli on the occasion of his 60th birthday
Mathematics Subject Classification (2000): 65D05, 65D10, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63.