Similar Documents
1.
Learning with coefficient-based regularization has attracted a considerable amount of attention in recent years, in both theoretical analysis and applications. In this paper, we study a coefficient-based learning scheme (CBLS) for the regression problem with an ℓq-regularizer (1 < q ≤ 2). Our analysis is conducted under more general conditions; in particular, the kernel function is not necessarily positive definite. This paper applies a concentration inequality with ℓ2-empirical covering numbers to present an elaborate capacity-dependence analysis for CBLS, which yields sharper estimates than existing bounds. Moreover, we estimate the regularization error to support the assumptions used in the error analysis, and we provide an illustrative example to further verify the theoretical results.
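A minimal numpy/scipy sketch of such a coefficient-based scheme (all parameter values are illustrative, and the sigmoid kernel below is just one example of a kernel that need not be positive definite; this is not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def cbls_fit(X, y, kernel, lam=1e-3, q=1.5):
    """Coefficient-based regularized least squares: minimize
    (1/m) * sum_i (f(x_i) - y_i)^2 + lam * sum_j |alpha_j|^q
    with f(x) = sum_j alpha_j K(x, x_j).  Because we optimize over the
    coefficients directly, K need not be positive definite."""
    m = len(y)
    K = kernel(X, X)                                   # m x m, K[i, j] = K(x_i, x_j)

    def objective(a):
        r = K @ a - y
        return r @ r / m + lam * np.sum(np.abs(a) ** q)

    def grad(a):                                       # smooth for 1 < q <= 2
        r = K @ a - y
        return 2.0 * K.T @ r / m + lam * q * np.sign(a) * np.abs(a) ** (q - 1)

    return minimize(objective, np.zeros(m), jac=grad, method="L-BFGS-B").x

# illustrative use with a sigmoid kernel, which is not positive definite in general
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
alpha = cbls_fit(X, y, lambda A, B: np.tanh(A @ B.T + 1.0))
```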

2.
The quantile regression problem is considered by learning schemes based on ℓ1-regularization and Gaussian kernels. The purpose of this paper is to present concentration estimates for the algorithms. Our analysis shows that the convergence behavior of ℓ1-quantile regression with Gaussian kernels is almost the same as that of the RKHS-based learning schemes. Furthermore, the previous analysis for kernel-based quantile regression usually requires that the output sample values be uniformly bounded, which excludes the common case with Gaussian noise. The error analysis presented in this paper gives satisfactory convergence rates even for unbounded sampling processes. Finally, numerical experiments are given that support the theoretical results.
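Because the pinball loss and the ℓ1 penalty are both piecewise linear, an estimator of this type can be written as a linear program. The following sketch is one such formulation (the kernel width, λ, and the LP encoding are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def l1_quantile_fit(X, y, tau=0.5, lam=1e-3, sigma=0.5):
    """Kernel quantile regression with pinball loss and l1 penalty:
    minimize (1/m) sum_i rho_tau(y_i - f(x_i)) + lam * sum_j |alpha_j|,
    f(x) = sum_j alpha_j K(x, x_j).  Splitting alpha and the residuals
    into nonnegative parts turns this into the LP below."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    I = np.eye(m)
    # variables z = [alpha+, alpha-, u+, u-], all >= 0, with
    # pinball loss tau*u+ + (1 - tau)*u- and residual y - K alpha = u+ - u-
    c = np.concatenate([lam * np.ones(2 * m),
                        tau / m * np.ones(m),
                        (1 - tau) / m * np.ones(m)])
    A_eq = np.hstack([K, -K, I, -I])       # K(alpha+ - alpha-) + u+ - u- = y
    z = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None)).x
    return z[:m] - z[m:2 * m]              # alpha = alpha+ - alpha-
```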

3.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space. The approximation property and the capacity of the reproducing kernel Hilbert space are measured via covering numbers. The obtained result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error. The sample error consists of the errors in learning the regression function and the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of learning the regression function (Theorem 2.2), the upper bound (Theorem 2.3) for the regularized learning algorithm associated with linear programming support vector regression is established.

4.
A standard assumption in the theoretical study of learning algorithms for regression is uniform boundedness of the output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
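For reference, the least squares regularization scheme itself has a closed form by the representer theorem; a short sketch with unbounded Gaussian noise on the outputs (bandwidth and λ are illustrative choices, not the paper's):

```python
import numpy as np

def krr_fit(X, y, lam, sigma=0.3):
    """Least squares regularization in an RKHS (kernel ridge regression).
    By the representer theorem the minimizer of
    (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    is f(x) = sum_i alpha_i K(x, x_i) with (K + lam*m*I) alpha = y.
    The estimator itself never needs bounded outputs; only the
    classical error analysis does."""
    m = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K, np.linalg.solve(K + lam * m * np.eye(m), y)

# unbounded sampling: Gaussian noise on the outputs
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 1))
y = np.cos(2 * np.pi * X[:, 0]) + 0.2 * rng.standard_normal(100)
K, alpha = krr_fit(X, y, lam=1e-3)
```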

5.
In this paper we establish error estimates for multi-penalty regularization under a general smoothness assumption in the context of learning theory. One motivation for this work is to study the convergence of two-parameter regularization theoretically in the manifold learning setting. In this spirit, we obtain error bounds for the manifold learning problem using the more general framework of multi-penalty regularization. We propose a new parameter choice rule, the "balanced-discrepancy principle", and analyze the convergence of the scheme with the help of the estimated error bounds. We show that multi-penalty regularization with the proposed parameter choice exhibits convergence rates similar to single-penalty regularization. Finally, we demonstrate on a series of test samples the superiority of multi-parameter regularization over single-penalty regularization.
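A hedged two-penalty sketch in the manifold-learning spirit (the graph-Laplacian penalty and all parameters are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def graph_laplacian(X, s=0.5):
    """Toy graph Laplacian L = D - W from Gaussian affinities over the inputs."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * s ** 2))
    return np.diag(W.sum(axis=1)) - W

def multi_penalty_fit(K, L, y, lam1, lam2):
    """Two-penalty least squares over coefficients a:
    minimize (1/m)||K a - y||^2 + lam1 * a'K a + lam2 * (K a)'L(K a).
    Setting the gradient to zero gives the linear system below."""
    m = len(y)
    return np.linalg.solve(K + lam1 * m * np.eye(m) + lam2 * m * L @ K, y)
```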

6.
This paper addresses learning algorithms on the unit sphere. The main purpose is to present an error analysis for regression generated by regularized least-squares algorithms with a spherical harmonics kernel. The excess error can be estimated by the sum of the sample error and the regularization error. Our study shows that, by introducing a suitable spherical harmonics kernel, the regularization parameter can decrease arbitrarily fast with the sample size.

7.
We propose a stochastic gradient descent algorithm for learning the gradient of a regression function from random samples of function values. This is a learning algorithm involving Mercer kernels. By a detailed analysis in reproducing kernel Hilbert spaces, we provide error bounds showing that the gradient estimated by the algorithm converges to the true gradient, under natural conditions on the regression function and suitable choices of the step sizes and regularization parameters.
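A rough sketch of how such a scheme can look in code, assuming a Gaussian Mercer kernel, Gaussian locality weights, and a first-order Taylor residual over random sample pairs (all of these are illustrative modeling choices, not the paper's algorithm):

```python
import numpy as np

def learn_gradient_sgd(X, y, sigma=0.4, s=0.2, lam=1e-3, eta=0.5,
                       n_steps=20000, seed=0):
    """Estimate grad f_rho as g(x) = sum_k C[k] * K(x, x_k) by stochastic
    gradient descent on the weighted first-order Taylor residual
    w_ij * (y_i - y_j + g(x_i) . (x_j - x_i))^2  plus  lam * ||g||_K^2,
    sampling one pair (i, j) per step."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))     # Mercer (Gaussian) kernel matrix
    W = np.exp(-d2 / (2 * s ** 2))         # locality weights w_ij
    C = np.zeros((m, d))                   # coefficient rows of g
    for t in range(1, n_steps + 1):
        i, j = rng.integers(m), rng.integers(m)
        dx = X[j] - X[i]
        e = y[i] - y[j] + K[i] @ C @ dx    # Taylor residual at the pair
        C -= eta / np.sqrt(t) * (W[i, j] * e * np.outer(K[i], dx) + lam * K @ C)
    return K, C                            # estimated gradient at x_i: K[i] @ C
```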

8.
This paper presents learning rates for least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space, measured by covering numbers. We also establish a direct approximation theorem for Bernstein-Durrmeyer operators with respect to a Borel probability measure.

9.
In this paper, we propose a two-step kernel learning method based on support vector regression (SVR) for financial time series forecasting. Given a number of candidate kernels, our method learns a sparse linear combination of these kernels so that the resulting kernel can be used to predict well on future data. The L1-norm regularization approach is used to achieve kernel learning. Since the regularization parameter must be carefully selected, to facilitate parameter tuning we develop an efficient solution path algorithm that computes the optimal solutions for all possible values of the regularization parameter. Our kernel learning method has been applied to forecast the S&P500 and NASDAQ market indices and showed promising results.
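One hedged way to realize the two-step idea (the per-kernel base predictors, the Lasso step, and all parameters below are illustrative assumptions, not the authors' formulation):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def two_step_kernel_svr(X, y, sigmas=(0.1, 0.3, 1.0, 3.0), l1=1e-2, ridge=1e-2):
    """Step 1: fit a cheap base predictor per candidate kernel, then learn a
    sparse nonnegative combination of their outputs by L1-regularized least
    squares.  Step 2: train SVR on the resulting combined kernel."""
    m = len(y)
    Ks = [rbf(X, X, s) for s in sigmas]
    # in-sample kernel-ridge predictions, one column per candidate kernel
    P = np.column_stack([K @ np.linalg.solve(K + ridge * m * np.eye(m), y)
                         for K in Ks])
    mu = Lasso(alpha=l1, positive=True, fit_intercept=False).fit(P, y).coef_
    K_comb = sum(w * K for w, K in zip(mu, Ks))
    svr = SVR(kernel="precomputed").fit(K_comb, y)   # step 2
    return svr, mu
```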

10.
In this paper we study conditional quantile regression by learning algorithms generated from Tikhonov regularization schemes associated with the pinball loss and varying Gaussian kernels. Our main goal is to provide convergence rates for the algorithm and to illustrate differences between conditional quantile regression and least squares regression. Applying varying Gaussian kernels improves the approximation ability of the algorithm. Bounds for the sample error are achieved by using a projection operator, a variance-expectation bound derived from a condition on the conditional distributions, and a tight bound for the covering numbers involving the Gaussian kernels.

11.
The least-square regression problem is considered by regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve the learning rates in the literature.

12.
The regression problem in learning theory is investigated with least square Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our previous work and apply the sampling operator to the error analysis in both the RKHS norm and the L2 norm. The tool for estimating the sample error is a Bennett inequality for random variables with values in Hilbert spaces. By taking the Hilbert space to be the one consisting of Hilbert-Schmidt operators on the RKHS, we improve the error bounds in the L2 metric, motivated by an idea of Caponnetto and De Vito. The error bounds we derive in the RKHS norm, together with a Tsybakov function we discuss here, yield interesting applications to the error analysis of the (binary) classification problem, since the RKHS metric controls the one for uniform convergence.

13.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^(-ζ) with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.

14.
Elastic-net regularization in learning theory
Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie [H. Zou, T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B 67(2) (2005) 301–320] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression, where we allow the response variable to be vector-valued, and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which differs from the optimization procedure originally proposed in the above-cited work.
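For orientation, a generic iterative soft-thresholding (ISTA) sketch for the elastic-net functional; the paper derives its own thresholding iteration, which need not coincide step-for-step with this one:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ista(X, y, lam1=1e-2, lam2=1e-2, n_iter=500):
    """ISTA for (1/n)||X b - y||^2 + lam1*||b||_1 + lam2*||b||_2^2:
    gradient step on the smooth part, then soft-thresholding (the
    proximal map of the l1 term)."""
    n, p = X.shape
    beta = np.zeros(p)
    L = 2 * np.linalg.norm(X, 2) ** 2 / n + 2 * lam2   # Lipschitz constant
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ beta - y) / n + 2 * lam2 * beta
        beta = soft_threshold(beta - grad / L, lam1 / L)
    return beta
```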

15.
In regularized kernel methods, the solution of a learning problem is found by minimizing a functional consisting of an empirical risk and a regularization term. In this paper, we study the existence of an optimal solution of multi-kernel regularization learning. First, we improve a previous result on this problem due to Micchelli and Pontil, and prove that the optimal solution exists whenever the kernel set is compact. Second, we consider this problem for Gaussian kernels with variance σ ∈ (0,∞), and give some conditions under which the optimal solution exists.

16.
In this paper, we study the multi-parameter Tikhonov regularization method, which adds multiple different penalties to exhibit multi-scale features of the solution. An optimal error bound for the regularized solution is obtained by an a priori choice of the multiple regularization parameters. Some theoretical results on the dependence of the regularized solution on the regularization parameters are presented. Then an a posteriori parameter choice, the damped Morozov discrepancy principle, is introduced to determine the multiple regularization parameters. Five model functions, i.e., two hyperbolic model functions, a linear model function, an exponential model function and a logarithmic model function, are proposed to solve the damped Morozov discrepancy principle. Furthermore, four efficient model function algorithms are developed for finding reasonable multiple regularization parameters, and their convergence properties are also studied. Numerical results for several examples show that the damped discrepancy principle is competitive with the standard one, and that the model function algorithms are efficient for choosing regularization parameters.
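A single-parameter sketch of the damped discrepancy criterion (the functional form with damping factor γ and the bisection solver are illustrative simplifications of the multi-parameter, model-function machinery in the paper):

```python
import numpy as np

def tikhonov(A, y, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def damped_morozov(A, y, delta, gamma=0.5, lo=1e-12, hi=1e2, iters=60):
    """Choose alpha with ||A x_a - y||^2 + gamma*a*||x_a||^2 = delta^2
    (gamma = 0 recovers the standard discrepancy principle).  For
    gamma in [0, 1] the left-hand side is increasing in alpha, so we can
    bisect; assumes G(lo) < 0 < G(hi)."""
    def G(alpha):
        x = tikhonov(A, y, alpha)
        return np.sum((A @ x - y) ** 2) + gamma * alpha * np.sum(x ** 2) - delta ** 2
    for _ in range(iters):
        mid = np.sqrt(lo * hi)     # geometric midpoint: alpha spans many decades
        lo, hi = (lo, mid) if G(mid) > 0 else (mid, hi)
    return hi
```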

17.
In this paper, we deal with nonlinear ill-posed problems involving m-accretive mappings in Banach spaces. We consider a derivative- and inverse-free method for the implementation of the Lavrentiev regularization method. Using a general Hölder-type source condition, we obtain an optimal-order error estimate. We also consider the adaptive parameter choice strategy proposed by Pereverzev and Schock (2005) for choosing the regularization parameter.
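In the linear monotone case the regularized equation is particularly simple; a minimal sketch (a grid-based adaptive choice in the spirit of Pereverzev and Schock can then be layered on top, cf. the balancing sketch under the next item):

```python
import numpy as np

def lavrentiev(A, y, lam):
    """Lavrentiev regularization for a monotone (accretive) operator:
    solve (A + lam*I) x = y.  No adjoint, no normal equations, and no
    derivative of the forward map is needed, which is what makes
    derivative- and inverse-free implementations natural."""
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), y)
```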

18.
We discuss the problem of parameter choice in learning algorithms generated by a general regularization scheme. Such a scheme covers well-known algorithms such as regularized least squares and gradient descent learning. It is known that, in contrast to classical deterministic regularization methods, the performance of regularized learning algorithms is influenced not only by the smoothness of a target function but also by the capacity of the space where regularization is performed. In the infinite-dimensional case the latter is usually measured in terms of the effective dimension. In the context of supervised learning, both the smoothness and the effective dimension are intrinsically unknown a priori. Therefore we are interested in a posteriori regularization parameter choice, and we propose a new form of the balancing principle. An advantage of this strategy over known rules such as cross-validation-based adaptation is that it does not require any data splitting and allows the use of all available labeled data in the construction of regularized approximants. We provide an analysis of the proposed rule and demonstrate its advantage in simulations.
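A generic Lepskii-type sketch of a balancing rule of this kind (the factor 4, the grid, and the bound ρ are illustrative; the paper's new form of the principle refines this recipe):

```python
import numpy as np

def balancing_choice(fits, dist, rho):
    """Lepskii-type balancing: `fits` maps each parameter on an increasing
    grid to a fitted estimator, `dist` is the relevant norm of differences,
    and rho(lam) bounds the stochastic error (decreasing in lam).  Keep the
    largest lam whose fit stays within 4*rho(mu) of every smaller-mu fit;
    no data splitting is required."""
    lams = sorted(fits)
    chosen = lams[0]
    for i, lam in enumerate(lams):
        if all(dist(fits[lam], fits[mu]) <= 4 * rho(mu) for mu in lams[:i]):
            chosen = lam
        else:
            break
    return chosen
```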

19.
In this study we prove a stability estimate for an inverse heat source problem in the n-dimensional case. We present a revised generalized Tikhonov regularization and obtain an error estimate. Numerical experiments for the one-dimensional and two-dimensional cases show that the revised generalized Tikhonov regularization works well.
