Similar Documents
20 similar documents found
1.
In this paper, we study the multi-parameter Tikhonov regularization method, which adds multiple different penalties to exhibit multi-scale features of the solution. An optimal error bound for the regularized solution is obtained by an a priori choice of the multiple regularization parameters. Some theoretical results on the dependence of the regularized solution on the regularization parameters are presented. Then, an a posteriori parameter choice, namely the damped Morozov discrepancy principle, is introduced to determine the multiple regularization parameters. Five model functions (two hyperbolic, one linear, one exponential, and one logarithmic) are proposed for solving the damped Morozov discrepancy principle. Furthermore, four efficient model-function algorithms are developed for finding reasonable multiple regularization parameters, and their convergence properties are also studied. Numerical results for several examples show that the damped discrepancy principle is competitive with the standard one, and that the model-function algorithms are efficient for choosing regularization parameters.
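For orientation, the setup can be written schematically as follows; the notation (operator A, noisy data y^δ, penalties Ω_i, damping weight γ) is ours and is not quoted from the paper:

```latex
% Multi-parameter Tikhonov: one data-fit term plus several penalties
x_{\alpha} \in \arg\min_{x}\; \|Ax - y^{\delta}\|^{2}
      + \sum_{i=1}^{n} \alpha_i\, \Omega_i(x),
\qquad \alpha = (\alpha_1,\dots,\alpha_n),\quad \alpha_i > 0.

% Damped Morozov discrepancy principle (schematic): with noise level
% \delta and a damping weight \gamma \in [0,1], choose \alpha so that
\|Ax_{\alpha} - y^{\delta}\|^{2}
      + \gamma \sum_{i=1}^{n} \alpha_i\, \Omega_i(x_{\alpha}) = \delta^{2}.
% \gamma = 0 recovers the standard Morozov discrepancy principle.
```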

2.
We discuss the problem of parameter choice in learning algorithms generated by a general regularization scheme. Such a scheme covers well-known algorithms such as regularized least squares and gradient descent learning. It is known that, in contrast to classical deterministic regularization methods, the performance of regularized learning algorithms is influenced not only by the smoothness of the target function but also by the capacity of the space in which regularization is performed. In the infinite-dimensional case the latter is usually measured in terms of the effective dimension. In the context of supervised learning, both the smoothness and the effective dimension are intrinsically unknown a priori. We are therefore interested in a posteriori regularization parameter choice, and we propose a new form of the balancing principle. An advantage of this strategy over known rules such as cross-validation-based adaptation is that it does not require any data splitting and allows the use of all available labeled data in the construction of the regularized approximants. We provide an analysis of the proposed rule and demonstrate its advantage in simulations.
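As a rough illustration of how a balancing (Lepskii-type) rule operates, here is a minimal sketch in Python; the constant c = 4, the grid ordering, and the error bounds sigma are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def balancing_principle(estimators, sigma, dist, c=4.0):
    """Lepskii-type balancing rule (illustrative sketch).

    estimators -- approximants f_0, f_1, ... for an increasing grid of
                  regularization parameters lam_0 < lam_1 < ...
    sigma      -- sigma[j]: estimated stochastic error bound at lam_j,
                  assumed non-increasing in j
    dist       -- norm used to compare two approximants
    Returns the index of the selected parameter: the largest lam_j whose
    estimator stays within c*sigma[i] of every estimator with smaller lam_i.
    """
    j_star = 0
    for j in range(1, len(estimators)):
        if all(dist(estimators[j], estimators[i]) <= c * sigma[i]
               for i in range(j)):
            j_star = j
        else:
            break
    return j_star

# Illustrative call: compare approximants in the empirical L2 norm.
# j = balancing_principle(fs, sigmas,
#         dist=lambda f, g: np.linalg.norm(f - g) / np.sqrt(len(f)))
```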

3.
This paper addresses learning algorithms on the unit sphere. The main purpose is to present an error analysis for regression generated by regularized least squares algorithms with a spherical harmonics kernel. The excess error can be estimated by the sum of the sample error and the regularization error. Our study shows that, by introducing a suitable spherical harmonics kernel, the regularization parameter can decrease arbitrarily fast with the sample size.

4.
The stable solution of ill-posed nonlinear operator equations in Banach space requires regularization. One important approach is based on Tikhonov regularization, in which case a one-parameter family of regularized solutions is obtained, and it is crucial to choose the parameter appropriately. Here, a sequential variant of the discrepancy principle is analysed. In many cases such a parameter choice exhibits the feature, called the regularization property below, that the chosen parameter tends to zero as the noise tends to zero, but more slowly than the noise level. We show this regularization property under two natural assumptions: first, exact penalization must be excluded, and second, the discrepancy principle must stop after a finite number of iterations. We conclude this study with a discussion of some consequences for convergence rates obtained by the discrepancy principle under the validity of a variational inequality, a recent tool for the analysis of inverse problems.
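A minimal sketch of a sequential discrepancy principle on a geometric grid, under assumed notation (the solver, the residual, the noise level delta, and the parameters tau and q are ours); this illustrates the general idea rather than the paper's exact procedure:

```python
def sequential_discrepancy(solve_tikhonov, residual_norm, delta,
                           alpha0=1.0, q=0.5, tau=1.5, max_iter=50):
    """Walk down the grid alpha0 * q**k (0 < q < 1) and stop at the first
    regularized solution whose residual falls below tau * delta.

    solve_tikhonov(alpha) -- returns the Tikhonov minimizer x_alpha
    residual_norm(x)      -- evaluates ||F(x) - y_delta||
    """
    alpha = alpha0
    x = solve_tikhonov(alpha)
    for _ in range(max_iter):
        if residual_norm(x) <= tau * delta:
            return alpha, x          # discrepancy principle satisfied
        alpha *= q                   # otherwise decrease the parameter
        x = solve_tikhonov(alpha)
    return alpha, x                  # grid exhausted without stopping
```

Note that the second assumption in the abstract, stopping after finitely many iterations, corresponds to this loop terminating before the grid is exhausted.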

5.
We study multi-parameter regularization (multiple penalties) for solving linear inverse problems, in order to promote simultaneously distinct features of the sought-for objects. We revisit a balancing principle for choosing regularization parameters from the viewpoint of augmented Tikhonov regularization, and derive a new parameter choice strategy called the balanced discrepancy principle. A priori and a posteriori error estimates are provided to justify the principles theoretically, and numerical algorithms for implementing them efficiently are also given. Numerical results on deblurring are presented to illustrate the feasibility of the balanced discrepancy principle.

6.
This paper presents an error analysis for classification algorithms generated by regularization schemes with polynomial kernels. Explicit convergence rates are provided for support vector machine (SVM) soft margin classifiers. The misclassification error can be estimated by the sum of the sample error and the regularization error. The main difficulty in studying algorithms with polynomial kernels is the regularization error, which depends strongly on the degrees of the kernel polynomials. Here we overcome this difficulty by bounding the reproducing kernel Hilbert space norm of Durrmeyer operators and estimating the rate of approximation by Durrmeyer operators in a weighted L1 space (the weight being a probability distribution). Our study shows that the regularization parameter should decrease exponentially fast with the sample size, which is a special feature of polynomial kernels. Dedicated to Charlie Micchelli on the occasion of his 60th birthday. Mathematics subject classifications (2000): 68T05, 62J02. The first author (Ding-Xuan Zhou) is supported partially by the Research Grants Council of Hong Kong (Project No. CityU 103704).

7.
In this paper we establish error estimates for multi-penalty regularization under a general smoothness assumption in the context of learning theory. One motivation for this work is to study theoretically the convergence of two-parameter regularization in the manifold learning setting. In this spirit, we obtain error bounds for the manifold learning problem using the more general framework of multi-penalty regularization. We propose a new parameter choice rule, the "balanced-discrepancy principle", and analyze the convergence of the scheme with the help of the estimated error bounds. We show that multi-penalty regularization with the proposed parameter choice exhibits convergence rates similar to those of single-penalty regularization. Finally, on a series of test samples, we demonstrate the superiority of multi-parameter regularization over single-penalty regularization.

8.
Nonstationary iterated Tikhonov is an iterative regularization method that requires a strategy for choosing the Tikhonov regularization parameter at each iteration, together with an early termination of the iterative process. A classical choice for the regularization parameters is a decreasing geometric sequence, which leads to a linear convergence rate. The early iterations quickly compute a good approximation of the true solution, but the main drawback of this choice is a rapid growth of the error in later iterations; as a consequence, a stopping criterion such as the discrepancy principle can fail to deliver a good approximation. In this paper we show, by a filter factor analysis, that a nondecreasing sequence of regularization parameters can provide rapid and stable convergence, so that a reliable stopping criterion is no longer necessary. A nondecreasing geometric sequence of Tikhonov regularization parameters confined to a fixed interval is proposed and numerically validated on deblurring problems.
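A minimal sketch of the underlying iteration for a discrete linear problem A x ≈ b; the parameter sequences mentioned in the usage comments are illustrative, and only the nondecreasing variant reflects the proposal here:

```python
import numpy as np

def iterated_tikhonov(A, b, alphas, x0=None):
    """Nonstationary iterated Tikhonov (illustrative sketch):
    x_{k+1} = x_k + (A^T A + alpha_k I)^{-1} A^T (b - A x_k).
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    AtA, Atb = A.T @ A, A.T @ b
    for alpha in alphas:
        # one Tikhonov-regularized correction step with parameter alpha
        x = x + np.linalg.solve(AtA + alpha * np.eye(n), Atb - AtA @ x)
    return x

# Classical decreasing choice: alphas = [a0 * q**k for k in range(K)], 0 < q < 1.
# Nondecreasing variant (in the spirit of this paper): a geometric sequence
# increasing within a fixed interval [a_min, a_max].
```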

9.
In this paper we study the Browder–Tikhonov regularization method for finding a common solution of a system of nonlinear ill-posed equations with potential, hemicontinuous and monotone mappings in Banach spaces. We give a principle, called the quasi-residual principle, for choosing the value of the regularization parameter, and we provide an estimate of the convergence rate of the regularized solutions.

10.
In this paper, we propose a two-step kernel learning method based on support vector regression (SVR) for financial time series forecasting. Given a number of candidate kernels, our method learns a sparse linear combination of these kernels so that the resulting kernel can be used to predict well on future data. The L1-norm regularization approach is used to achieve kernel learning. Since the regularization parameter must be carefully selected, to facilitate parameter tuning we develop an efficient solution path algorithm that computes the optimal solutions for all possible values of the regularization parameter. Our kernel learning method has been applied to forecasting the S&P500 and NASDAQ market indices and showed promising results.
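To make the two-step idea concrete, here is a loose Python sketch. Step 1 below is a crude stand-in (per-kernel ridge predictors combined by a nonnegative Lasso) for the paper's L1 formulation, and the paper's actual contribution, the solution-path algorithm sweeping all values of the L1 parameter, is not reproduced; all parameter values (eps, lam, C) are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

def sparse_kernel_weights(kernels, y, eps=1e-3, lam=0.05):
    """Step 1 (stand-in): build one ridge-type predictor per candidate
    kernel matrix K_i, then fit y on these predictors with a nonnegative
    L1 penalty; the Lasso zeros out unhelpful kernels."""
    n = len(y)
    preds = np.column_stack([
        K @ np.linalg.solve(K + eps * np.eye(n), y) for K in kernels
    ])
    return Lasso(alpha=lam, positive=True,
                 fit_intercept=False).fit(preds, y).coef_

def combined_svr(kernels, mu, y, C=1.0):
    """Step 2: train an SVR on the precomputed combined Gram matrix."""
    K = sum(m * Km for m, Km in zip(mu, kernels))
    return SVR(kernel="precomputed", C=C).fit(K, y)
```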

11.
We investigate a novel adaptive rule for choosing the Tikhonov regularization parameter in numerical differentiation, a classic ill-posed problem. Assuming a general Hölder-type error estimate with unknown constants for numerical differentiation, we choose the regularization parameter from a geometric set, which provides a nearly optimal convergence rate with very limited a priori information. Numerical simulations on image edge detection verify the reliability and efficiency of the new adaptive approach.

12.
In this article we discuss the regularization of a semi-discrete ill-posed problem that arises when a collocation method is applied to a Fredholm integral equation of the first kind. In this context we analyse Tikhonov regularization in Sobolev scales and prove error bounds under general source conditions. Moreover, we study an a posteriori regularization parameter choice by means of the balancing principle.

13.
We consider Tikhonov regularization of linear ill-posed problems with noisy data. The choice of the regularization parameter by classical rules, such as the discrepancy principle, requires exact knowledge of the noise level: these rules fail when the noise level is underestimated, and they give a large error in the regularized solution even when the noise level is only moderately overestimated. We propose a general family of parameter choice rules, which includes many known rules and guarantees convergence of the approximations; quasi-optimality is proved for a sub-family of these rules. Many rules from this family also work well when the noise level is under- or overestimated by a large factor. In the case of an exact or overestimated noise level, we propose to take the regularization parameter as the minimum of the parameters from the post-estimated monotone error rule and a certain new rule from the proposed family. The advantages of the new rules are demonstrated in extensive numerical experiments.

14.
In this paper, we propose a two-step kernel learning method based on support vector regression (SVR) for financial time series forecasting. Given a number of candidate kernels, our method learns a sparse linear combination of these kernels so that the resulting kernel can be used to predict well on future data. The L1-norm regularization approach is used to achieve kernel learning. Since the regularization parameter must be carefully selected, to facilitate parameter tuning we develop an efficient solution path algorithm that computes the optimal solutions for all possible values of the regularization parameter. Our kernel learning method has been applied to forecasting the S&P500 and NASDAQ market indices and showed promising results.

15.
In this paper we consider a collocation method for solving Fredholm integral equations of the first kind, which is known to be an ill-posed problem. An "unregularized" use of this method can give reliable results when the rate at which the smallest singular values of the collocation matrices decrease is known a priori; in this case the number of collocation points plays the role of the regularization parameter. If this a priori information is not available, then a combination of collocation with Tikhonov regularization can be the method of choice. We analyze such regularized collocation in a rather general setting, in which the solution smoothness is given as a source condition with an operator monotone index function. This setting covers all types of smoothness studied so far in the theory of Tikhonov regularization. A further issue discussed in this paper is an a posteriori choice of the regularization parameter, which allows us to reach an optimal order of accuracy for the deterministic noise model without any knowledge of the solution smoothness.
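Source conditions of the kind referred to here are usually written as follows; φ is the index function, and the examples in the comments are standard in this literature rather than taken from the paper:

```latex
% General source condition for the exact solution x^\dagger
x^{\dagger} = \varphi(A^{*}A)\, v, \qquad \|v\| \le R,
% with \varphi continuous, non-decreasing, \varphi(0) = 0 and, in the
% setting above, operator monotone. Typical examples:
%   Hoelder-type smoothness:   \varphi(t) = t^{\nu}, \ 0 < \nu \le 1,
%   logarithmic smoothness:    \varphi(t) = \big(\log(1/t)\big)^{-p}.
```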

16.
陈仲英, 宋丽红. 《东北数学》 2005, 21(2): 131-134
Many industrial and engineering applications require the numerical solution of ill-posed problems. Regularization methods are employed to find approximate solutions of these problems, and the choice of the regularization parameters by numerical algorithms is one of the most important issues for the success of regularization methods. When we use some discrepancy principles to determine the regularization parameter, ...

17.
In this paper we present a simple modification of the Method of Regularized Stokeslets which significantly reduces the dependence of the accuracy on the regularization parameter and achieves accurate solutions with low computational effort. Thanks to this modification, the regularization parameter is no longer a free tuning parameter. The new approach is based on a special treatment of the near-singular kernel evaluations, in which spatially averaged quantities are considered. Numerical tests for both exterior and interior Stokes flows are presented, showing accurate results and good conditioning of the kernel matrix.

18.
In positron emission tomography, image data correspond to measurements of photons emitted from a radioactive tracer in the subject. Such count data are typically modeled by Poisson random variables, leading to the use of the negative-log Poisson likelihood fit-to-data function. Regularization is needed, however, in order to guarantee reconstructions with minimal artifacts. Given that tracer densities are primarily smoothly varying but also contain sharp jumps (edges), total variation regularization is a natural choice; the resulting computational problem, however, is quite challenging. In this paper, we present an efficient computational method for this problem. Convergence of the method had previously been shown for quadratic regularization functions, and here convergence is shown for total variation regularization. We also present three regularization parameter choice methods for use on total variation-regularized negative-log Poisson likelihood problems. We test the computational and regularization parameter selection methods on two synthetic data sets.
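Schematically, the variational problem described here has the following standard form; the notation (forward operator A, measured counts z, parameter α) is assumed, and actual formulations often add a known background term inside the logarithm:

```latex
% Total-variation-regularized negative-log Poisson likelihood
\min_{x \ge 0}\;
  \sum_{i} \Big[ (Ax)_{i} - z_{i} \log (Ax)_{i} \Big]
  \;+\; \alpha\, \mathrm{TV}(x),
\qquad \mathrm{TV}(x) = \int_{\Omega} |\nabla x|\, dr .
```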

19.
Optimal Rates for the Regularized Least-Squares Algorithm
We develop a theoretical analysis of the performance of the regularized least-squares algorithm on a reproducing kernel Hilbert space in the supervised learning setting. The results hold in the general framework of vector-valued functions and can therefore be applied to multi-task problems. In particular, we observe that the concept of effective dimension plays a central role in the definition of a criterion for choosing the regularization parameter as a function of the number of samples. Moreover, a complete minimax analysis of the problem is described, showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel. Finally, we give an improved lower rate result describing the worst asymptotic behavior on individual probability measures rather than over classes of priors.
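The effective dimension invoked here (and in item 2 above) is, in the standard formulation of this literature, the trace quantity below, where T denotes the integral operator induced by the kernel; the symbols are assumed, not quoted from the paper:

```latex
% Effective dimension of the kernel integral operator T at level \lambda
\mathcal{N}(\lambda) \;=\; \operatorname{Tr}\!\big( (T + \lambda I)^{-1} T \big),
\qquad \lambda > 0,
% finite whenever T is trace class; it measures the capacity of the
% hypothesis space at regularization level \lambda.
```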

20.
We analyze the learning rates of least squares regression with data-dependent hypothesis spaces and coefficient regularization algorithms based on general kernels. Under a very mild regularity condition on the regression function, we obtain a bound for the approximation error by estimating the corresponding K-functional. Combining this estimate with a previous result on the sample error, we derive a dimension-free learning rate via a proper choice of the regularization parameter.
