20 similar documents found.
1.
In this paper, we consider the robust regression problem associated with the Huber loss in the framework of functional linear models and reproducing kernel Hilbert spaces. We propose an Ivanov regularized empirical risk minimization procedure to approximate the slope function of the linear model in the presence of outliers or heavy-tailed noise. By appropriately tuning the scale parameter of the Huber loss, we establish explicit rates of convergence for our estimates in terms of excess prediction risk under mild assumptions. Our study justifies the efficiency of Huber regression for functional data from a theoretical viewpoint.
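As a point of reference for the loss being tuned, here is a minimal NumPy sketch of the Huber loss with scale parameter sigma; the parameter name and default value are illustrative, not taken from the paper.

```python
import numpy as np

def huber_loss(residual, sigma=1.0):
    """Huber loss: quadratic for |r| <= sigma, linear beyond.

    sigma is the scale parameter that trades robustness to
    outliers against statistical efficiency."""
    r = np.abs(residual)
    quadratic = 0.5 * r**2
    linear = sigma * r - 0.5 * sigma**2
    return np.where(r <= sigma, quadratic, linear)
```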
2.
This paper proposes a method to estimate the conditional quantile function using an epsilon-insensitive loss in a reproducing kernel Hilbert space. When choosing a smoothing parameter in nonparametric frameworks, it is necessary to evaluate the complexity of the model. In this regard, we provide a simple formula for computing the effective number of parameters when implementing an epsilon-insensitive loss. We also investigate the effects of the epsilon-insensitive loss.
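For concreteness, a minimal sketch of the standard (symmetric) epsilon-insensitive loss; the paper's quantile setting uses an asymmetric variant, so this is only the basic building block.

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    """Epsilon-insensitive loss: zero inside the eps-tube,
    linear outside. This is the symmetric SVR form; a quantile
    version weights positive and negative residuals differently."""
    return np.maximum(0.0, np.abs(residual) - eps)
```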
3.
5.
6.
Applied and Computational Harmonic Analysis, 2020, 48(3): 868-890
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in a variety of norms for the studied algorithms, under a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.
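Kernel ridge regression, the most explicit member of the spectral family studied, admits a closed-form solution; a minimal NumPy sketch under a Gaussian kernel (the kernel choice and parameter names are illustrative):

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then the RBF kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + n*lam*I) alpha = y, giving the predictor
    f(x) = sum_i alpha_i k(x_i, x). The n*lam scaling follows
    the common learning-theory convention."""
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    return gaussian_kernel(X_new, X_train, gamma) @ alpha
```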
7.
In this paper, we consider unregularized online learning algorithms in a reproducing kernel Hilbert space (RKHS). First, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general α-activating loss (see Definition 1 below). Our results extend and refine the results in [30] for the least squares loss and the recent result [3] for loss functions with a Lipschitz-continuous gradient. Moreover, we establish a very general condition on the step sizes which guarantees the convergence of the last iterate of such algorithms. Second, we establish, for the first time, the convergence of the unregularized pairwise learning algorithm with a general loss function and derive explicit rates under the assumption of polynomially decaying step sizes. Concrete examples are used to illustrate our main results. The main techniques are tools from convex analysis, refined inequalities for Gaussian averages [5], and an induction approach.
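As a rough illustration of an unregularized online algorithm of this kind, here is a sketch using the hinge loss as a stand-in for a general loss; the class name and step-size schedule are illustrative assumptions, not the paper's notation.

```python
import numpy as np

class OnlineKernelClassifier:
    """Sketch of an unregularized online classifier in an RKHS.
    The hypothesis f_t = sum_s c_s k(x_s, .) is kept as a list of
    support points and coefficients; no regularization term shrinks
    past coefficients."""

    def __init__(self, kernel):
        self.kernel = kernel
        self.points, self.coefs = [], []
        self.t = 0

    def predict(self, x):
        return sum(c * self.kernel(p, x)
                   for p, c in zip(self.points, self.coefs))

    def update(self, x, y):
        # A polynomially decaying step size (one common choice).
        self.t += 1
        eta = 1.0 / np.sqrt(self.t)
        # Hinge-loss subgradient step: move along y * k(x, .) when
        # the margin y * f_t(x) is below 1, otherwise do nothing.
        if y * self.predict(x) < 1.0:
            self.points.append(x)
            self.coefs.append(eta * y)
```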
8.
In this paper, a reproducing kernel method for solving singular integral equations (SIEs) with cosecant kernel is proposed. The difficulty in solving such equations lies in the singular term, which is removed here by an equivalent transformation. Compared with known investigations, the advantages of the method are that a representation of the exact solution is obtained in a reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. Furthermore, the representation of the reproducing kernel becomes simple by improving the definition of the traditional inner product, and the requirements on the image space of the operators are weakened compared with the traditional reproducing kernel method. Final numerical experiments illustrate that the method is efficient.
9.
Quantile regression is used in many areas of applied research and business; examples include actuarial, financial, and biometrical applications. We show that a non-parametric generalization of quantile regression based on kernels shares with support vector machines the property of consistency to the Bayes risk. We further use this consistency to prove that the non-parametric generalization approximates the conditional quantile function, which gives the mathematical justification for kernel-based quantile regression.
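Kernel-based quantile regression is built on the pinball (check) loss; a minimal sketch, with tau denoting the quantile level:

```python
import numpy as np

def pinball_loss(residual, tau=0.5):
    """Pinball loss for the tau-th quantile:
    tau * r for r >= 0, (tau - 1) * r otherwise.
    Minimizing its expectation yields the conditional quantile."""
    r = np.asarray(residual)
    return np.where(r >= 0, tau * r, (tau - 1) * r)
```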
10.
An approach for solving Fredholm integral equations of the first kind in a reproducing kernel Hilbert space (RKHS) is proposed. Interest in this problem is strongly motivated by applications to actual prospecting. In many applications one faces an ill-posed problem in the space C[a,b] or L2[a,b]: small measurement errors in the experimental data can produce unbounded errors in solutions of the equation. In this work, a representation of solutions of Fredholm integral equations of the first kind is obtained when solutions exist, and the stability of solutions in the RKHS is discussed. At the same time, it is shown that the approximate solutions are also stable in the RKHS with respect to the uniform (∞) norm or the L2 norm. A numerical experiment shows that the method given in the work is valid.
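The paper's RKHS representation of solutions is not reproduced here; as a generic illustration of why regularization is needed for first-kind equations, a standard Tikhonov-regularized discretization might look like this (grid size and lam are illustrative):

```python
import numpy as np

def fredholm_tikhonov(kernel, g, a, b, n=200, lam=1e-6):
    """Discretize integral_a^b K(s,t) f(t) dt = g(s) on a uniform
    grid and solve the Tikhonov-regularized normal equations
    (A^T A + lam*I) f = A^T g. Without lam the discrete problem is
    ill-conditioned: small errors in g blow up in f."""
    t = np.linspace(a, b, n)
    w = (b - a) / n                        # rectangular quadrature weight
    A = kernel(t[:, None], t[None, :]) * w # A[i, j] = K(s_i, t_j) * w
    rhs = g(t)
    f = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ rhs)
    return t, f
```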
11.
In this paper, we present sharp estimates for the covering numbers of the embedding of the reproducing kernel Hilbert space (RKHS) associated with the Weierstrass fractal kernel into the space of continuous functions. The method we apply is based on a characterization of the infinite-dimensional RKHS generated by the Weierstrass fractal kernel, and it requires estimates for the operator norms of orthogonal projections on the RKHS.
12.
Genevera I. Allen, Journal of Computational and Graphical Statistics, 2013, 22(2): 284-299
Selecting important features in nonlinear kernel spaces is a difficult challenge in both classification and regression problems. This article proposes to achieve feature selection by optimizing a simple criterion: a feature-regularized loss function. Features within the kernel are weighted, and a lasso penalty is placed on these weights to encourage sparsity. This feature-regularized loss function is minimized by estimating the weights in conjunction with the coefficients of the original classification or regression problem, thereby automatically procuring a subset of important features. The algorithm, KerNel Iterative Feature Extraction (KNIFE), is applicable to a wide variety of kernels and high-dimensional kernel problems. In addition, a modification of KNIFE gives a computationally attractive method for graphically depicting nonlinear relationships between features by estimating their feature weights over a range of regularization parameters. The utility of KNIFE in selecting features is demonstrated through simulations and examples for both kernel regression and support vector machines; feature path realizations also give graphical representations of important features and of the nonlinear relationships among variables. Supplementary materials with computer code and an appendix on convergence analysis are available online.
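A minimal sketch of the core idea, a kernel with per-feature weights that an L1 penalty can sparsify; the exact form KNIFE uses may differ, so treat this weighted Gaussian kernel as illustrative only:

```python
import numpy as np

def weighted_gaussian_kernel(X, Y, w):
    """Gaussian kernel with per-feature weights w >= 0:
    k_w(x, y) = exp(-sum_j w_j * (x_j - y_j)^2).
    In KNIFE-style feature selection, a lasso penalty on w drives
    the weights of unimportant features to exactly zero, removing
    those features from the kernel."""
    d2 = (w[None, None, :] * (X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2)
```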
13.
In [1-8], it is very difficult to obtain the reproducing kernel space of problem (1). This paper is concerned with a new algorithm for giving the analytical and approximate solutions of a class of fourth-order problems in a new reproducing kernel space. In an example, the numerical results are compared with both the exact solution and its n-th order derivatives. It is demonstrated that the new method is quite accurate and efficient for fourth-order problems.
14.
Christophe Crambes, Ali Gannoun, Yousri Henchiri, Statistics & Probability Letters, 2011, 81(12): 1847-1858
This paper deals with nonparametric estimation of conditional quantile regression when the explanatory variable X takes its values in a bounded subset of a functional space and the response Y takes its values in a compact subset of R. The functional observations X1, …, Xn are projected onto a finite-dimensional subspace spanned by a suitable orthonormal system, and each Xi is characterized by its coordinates in this basis. We then perform the Support Vector Machine quantile regression approach in finite dimension with the selected coefficients and establish the weak consistency of this estimator. The various parameters needed for the construction of the estimator are automatically selected by data splitting and by penalized empirical risk minimization.
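A minimal sketch of the projection step, assuming a uniform grid on [0, 1] and a trigonometric orthonormal system; the paper's choice of basis may differ:

```python
import numpy as np

def basis_coefficients(curves, grid, n_basis=10):
    """Project discretized curves X_i(t), sampled on a uniform grid
    over [0, 1], onto a Fourier-type orthonormal system by numerical
    integration. The resulting coefficient vectors replace the
    functional inputs in the finite-dimensional quantile regression."""
    T = grid[None, :]
    basis = [np.ones_like(T)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        basis.append(np.sqrt(2) * np.cos(2 * np.pi * k * T))
        basis.append(np.sqrt(2) * np.sin(2 * np.pi * k * T))
    B = np.vstack(basis[:n_basis])     # (n_basis, len(grid))
    dt = grid[1] - grid[0]
    return curves @ B.T * dt           # (n_curves, n_basis)
```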
15.
16.
17.
In this article we study reproducing kernel Hilbert spaces (RKHS) associated with translation-invariant Mercer kernels. Applying a special derivative reproducing property, we show that when the kernel is real analytic, every function from the RKHS is real analytic. This is used to investigate subspaces of the RKHS generated by a set of fundamental functions. The analyticity of functions from the RKHS enables us to derive some estimates for the covering numbers, which form an essential part of the analysis of some algorithms in learning theory.
18.
In this paper, we study several radial basis function approximation schemes in Sobolev spaces. We obtain an optimal error estimate by using a class of smoothing operators, and we discuss sufficient conditions for the smoothing operators to attain the desired approximation order. We then construct the smoothing operators from compactly supported radial kernels and use them to approximate Sobolev space functions with optimal convergence order. These kernels can be simply constructed and readily applied to practical problems. The results show that the approximation power depends on the precision of the sampling instrument and the density of the available data.
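One classical example of a compactly supported radial kernel of the kind discussed is Wendland's C^2 function; whether the paper uses this particular kernel is not stated, so it is shown only as a representative instance:

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported radial function
    phi(r) = (1 - r)^4_+ * (4r + 1), which is C^2 and positive
    definite on R^d for d <= 3; it vanishes identically for r >= 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
```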
19.
This paper investigates the forced Duffing equation with integral boundary conditions. An approximate solution is developed by combining the homotopy perturbation method (HPM) and the reproducing kernel Hilbert space method (RKHSM). HPM is based on the traditional perturbation method together with the homotopy technique; it can reduce nonlinear problems to linear ones and in most cases generates a rapidly convergent series solution. RKHSM is also an analytical technique, one that is powerful for solving linear boundary value problems. The forced Duffing equation with integral boundary conditions can therefore be solved by exploiting the advantages of both methods. Two numerical examples are presented to illustrate the strength of the method.
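For orientation, the forced Duffing equation itself can be written as a first-order system and integrated numerically; the sketch below uses SciPy with an initial-value setup and illustrative coefficients, not the paper's HPM/RKHSM scheme or its integral boundary conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing_rhs(t, u, delta=0.2, alpha=1.0, beta=1.0, gamma=0.3, omega=1.2):
    """Forced Duffing equation u'' + delta*u' + alpha*u + beta*u^3
    = gamma*cos(omega*t), rewritten as a first-order system in
    (u, u'). All coefficient values are illustrative."""
    return [u[1],
            -delta * u[1] - alpha * u[0] - beta * u[0] ** 3
            + gamma * np.cos(omega * t)]

# Integrate from rest over [0, 20] as a simple initial-value demo.
sol = solve_ivp(duffing_rhs, (0.0, 20.0), [0.0, 0.0], dense_output=True)
```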
20.
Xuemei Dong, Journal of Mathematical Analysis and Applications, 2008, 341(2): 1018-1027
We propose a stochastic gradient descent algorithm for learning the gradient of a regression function from random samples of function values. This is a learning algorithm involving Mercer kernels. Through a detailed analysis in reproducing kernel Hilbert spaces, we provide error bounds showing that the gradient estimated by the algorithm converges to the true gradient, under natural conditions on the regression function and suitable choices of the step sizes and regularization parameters.