Similar Documents
20 similar documents retrieved (search time: 421 ms)
1.
Least squares problems arise frequently in many disciplines, such as image restoration. In these areas the coefficient matrix of the least squares problem is usually ill-conditioned, so if the problem data carry some error, solving the problem with classical approaches may yield a meaningless solution. Tikhonov regularization is one of the most widely used approaches for dealing with such situations. In this paper we first briefly describe these approaches, and then present a robust optimization framework that takes the errors in the problem data into account. Finally, computational experiments on several ill-conditioned standard test problems, using both the Regularization Tools Matlab package for least squares problems and the robust optimization framework, show that the latter approach may be the right choice.
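As a rough illustration of the Tikhonov approach described in this abstract, the following Python sketch compares an unregularized least-squares solution with a Tikhonov-regularized one on an ill-conditioned test problem; the Vandermonde test matrix, noise level, and value of lam are illustrative assumptions, not data from the paper.

import numpy as np

def tikhonov(A, b, lam):
    # Solve the regularized normal equations (A^T A + lam*I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 10)         # ill-conditioned Vandermonde matrix
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-3 * rng.standard_normal(20)      # right-hand side with small noise

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]          # classical least squares
x_tik = tikhonov(A, b, lam=1e-6)                     # Tikhonov-regularized solution
print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_tik - x_true))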

2.
《Optimization》2012,61(6):699-716
We study a one-parameter regularization technique for convex optimization problems whose main feature is self-duality with respect to the Legendre–Fenchel conjugation. The self-dual technique, introduced by Goebel, can be defined for both convex and saddle functions. When applied to the latter, we show that if a saddle function has at least one saddle point, then the sequence of saddle points of the regularized saddle functions converges to the saddle point of minimal norm of the original one. For convex problems with inequality and state constraints, we apply the regularization directly on the objective and constraint functions, and show that, under suitable conditions, the associated Lagrangians of the regularized problem hypo/epi-converge to the original Lagrangian, and that the associated value functions also epi-converge to the original one. Finally, we find explicit conditions ensuring that the regularized sequence satisfies Slater's condition.

3.
In many science and engineering applications, the discretization of linear ill-posed problems gives rise to large ill-conditioned linear systems with the right-hand side degraded by noise. The solution of such linear systems requires the solution of minimization problems with a single quadratic constraint, which depends on an estimate of the variance of the noise. This strategy is known as regularization. In this work, we propose a modification of the Lagrange method for the solution of the noise-constrained regularization problem. We present numerical results for test problems and for image restoration and medical image denoising. Our results indicate that the proposed Lagrange method is effective and efficient in computing good regularized solutions of ill-conditioned linear systems and the corresponding Lagrange multipliers, and our numerical experiments show that it is computationally convenient. Therefore, the Lagrange method is a promising approach for dealing with ill-posed problems. This work was supported by the Italian FIRB Project “Parallel algorithms and Nonlinear Numerical Optimization” RBAU01JYPN.
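To make the constrained viewpoint concrete, here is a hedged Python sketch (not the authors' modified Lagrange method) that locates the multiplier of the quadratic constraint by simple bisection on the discrepancy: it seeks lam such that the Tikhonov solution x(lam) satisfies ||A x(lam) - b|| close to delta, assuming delta lies between the unregularized residual norm and ||b||.

import numpy as np

def discrepancy(A, b, lam):
    # Tikhonov solution for the multiplier lam and its residual norm.
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.linalg.norm(A @ x - b), x

def solve_noise_constrained(A, b, delta, lam_hi=1e6, tol=1e-8, max_iter=200):
    lam_lo = 0.0
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        res, x = discrepancy(A, b, lam)
        if res > delta:          # the residual grows with lam, so lam is too large
            lam_hi = lam
        else:
            lam_lo = lam
        if lam_hi - lam_lo < tol * max(lam_hi, 1.0):
            break
    return x, lam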

4.
Selection of the Tikhonov Regularization Parameter Based on a Chaotic Particle Swarm Optimization Algorithm
余瑞艳 《数学研究》2011,44(1):101-106
Tikhonov regularization is one of the most effective methods for solving ill-posed problems, and the optimal choice of the regularization parameter is the key issue. This paper combines a chaotic particle swarm optimization algorithm with Tikhonov regularization: the fitness function of the swarm is designed from the Morozov discrepancy principle, and the strengths of chaotic particle swarm optimization are exploited to provide an effective way of selecting the regularization parameter. Numerical experiments show that the proposed method handles ill-posed problems effectively and is a practical and efficient approach.
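The idea can be sketched as follows in Python: the fitness of a candidate parameter is its Morozov discrepancy abs(||A x(lam) - b|| - delta), and a particle swarm searches over log10(lam). The plain (non-chaotic) swarm below and all constants are illustrative assumptions, not the algorithm of the paper.

import numpy as np

def fitness(log_lam, A, b, delta):
    # Morozov-style fitness: distance of the Tikhonov residual from the noise level.
    lam = 10.0 ** log_lam
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return abs(np.linalg.norm(A @ x - b) - delta)

def pso_select_lambda(A, b, delta, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-12.0, 2.0, n_particles)       # particles live in log10(lambda)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, A, b, delta) for p in pos])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, A, b, delta) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return 10.0 ** gbest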

5.
In this paper, we deal with nonlinear ill-posed problems involving m-accretive mappings in Banach spaces. We consider a derivative- and inverse-free method for the implementation of the Lavrentiev regularization method. Using a general Hölder-type source condition, we obtain an optimal-order error estimate. We also consider the adaptive parameter choice strategy proposed by Pereverzev and Schock (2005) for choosing the regularization parameter.
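For readers unfamiliar with Lavrentiev regularization, the regularized equation takes the following standard form (generic notation, not quoted from the paper):

$$ F\big(x_\alpha^\delta\big) + \alpha\,\big(x_\alpha^\delta - x_0\big) = y^\delta, \qquad \|y^\delta - y\| \le \delta, $$

where $F$ is the m-accretive operator, $y^\delta$ the noisy data, $x_0$ an initial guess, and $\alpha > 0$ the regularization parameter, chosen here by the adaptive strategy of Pereverzev and Schock.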

6.
The goal of this paper is to explore possibilities for applying the proximal point method to nonconvex problems. It can be proved that, for a wide class of problems, proximal regularization performed with appropriate regularization parameters ensures convexity of the auxiliary problems, and that each accumulation point of the method satisfies the necessary optimality conditions.
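The convexification effect can be made explicit in generic notation (an assumption about the setting, not a quotation from the paper): if $f$ is $\rho$-weakly convex, i.e. $f + \tfrac{\rho}{2}\|\cdot\|^2$ is convex, then the proximal auxiliary problem

$$ x^{k+1} \in \arg\min_{x} \; f(x) + \tfrac{\chi_k}{2}\,\|x - x^k\|^2 $$

has a convex objective whenever the regularization parameter satisfies $\chi_k \ge \rho$.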

7.
This paper presents a homotopy procedure which improves the solvability of mathematical programming problems arising from total variation methods for image denoising. The homotopy on the regularization parameter involves solving a sequence of equality-constrained optimization problems in which the positive regularization parameter is initially large and is gradually reduced to zero. Newton's method is used to solve the optimization problems, and numerical results are presented.
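A minimal Python skeleton of such a continuation strategy is given below; solve_subproblem stands in for the paper's Newton-based inner solver and is a hypothetical placeholder, as are the schedule constants and the names in the usage comment.

def homotopy(solve_subproblem, x0, beta0=1.0, factor=0.5, n_steps=10):
    # Warm-started continuation: shrink the regularization parameter step by step.
    x, beta = x0, beta0
    for _ in range(n_steps):
        x = solve_subproblem(x, beta)   # e.g. an equality-constrained Newton solve
        beta *= factor                  # drive the parameter toward zero
    return x

# Intended use (inner solver and data supplied by the application; hypothetical names):
#   x_denoised = homotopy(newton_tv_solver, noisy_image, beta0=10.0, factor=0.5)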

8.
Usually, direct methods are employed for the solution of linear systems arising in the context of optimization. However, motivated by the potential of multiscale refinement schemes for large problems of dynamic state estimation, we investigate in this paper the application of iterative solvers based on the concepts developed in Ref. 1. Specifically, we explore the effect of different system reductions for various Krylov-space iteration methods as well as three concepts of preconditioning. The first is the normalization of states and outputs, which also aids error analysis. Next, diagonal scale-dependent preconditioners are compared; they all bound the condition numbers independently of the refinement scale, but exhibit significant quantitative differences. Finally, the effect of the regularization parameter on condition numbers and iteration counts is analyzed. It turns out that a so-called simplified Uzawa scheme with Jacobi preconditioning and a suitable regularization parameter is the most efficient. The experiments also reveal that further improvements are necessary.
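As a small illustration of Jacobi (diagonal) preconditioning for a Krylov solver, the Python sketch below preconditions GMRES with the inverse diagonal of a sparse test matrix; the test system is an illustrative assumption, not the state-estimation problem of the paper.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
d = np.linspace(4.0, 400.0, n)                       # widely varying diagonal
A = sp.diags([-1.0, d, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

M = sp.diags(1.0 / A.diagonal())                     # Jacobi preconditioner: inverse of diag(A)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres exit flag {info}")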

9.
Inverse problems of identifying parameters in partial differential equations constitute an important class of problems with diverse real-world applications. These identification problems are commonly explored in an optimization framework, and there are many optimization formulations, each with its own advantages and disadvantages. Although a non-convex output least-squares (OLS) objective is commonly used, a convex modified output least-squares (MOLS) objective has shown encouraging results in recent years. In this work, we focus on various aspects of the MOLS approach. We devise a rigorous (quadratic and non-quadratic) regularization framework for the identification of smooth as well as discontinuous coefficients. This framework subsumes the total variation regularization that has attracted a great deal of attention in identifying sharply varying coefficients and also in image processing. We give new existence results for the regularized optimization problems for OLS and MOLS. Restricting to the Tikhonov (quadratic) regularization, we carry out a detailed study of various stability aspects of the inverse problem under data perturbation and give new stability estimates for general inverse problems using OLS and MOLS formulations. We give a discretization scheme for the continuous inverse problem and prove the convergence of the discrete inverse problem to the continuous one. We collect discrete formulas for OLS and MOLS and compute their gradients and Hessians. We present applications of our theoretical results. To show the feasibility of the MOLS framework, we also provide computational results for the inverse problem of identifying parameters in three different classes of partial differential equations.

10.
A new trust region algorithm for image restoration
Image restoration problems play an important role in remote sensing and astronomical image analysis. One common method for recovering a true image from a corrupted or blurred image is the least squares error (LSE) method, but the LSE method is unstable in practical applications. A popular way to overcome this instability is Tikhonov regularization. However, difficulties are encountered when adjusting the so-called regularization parameter a. Moreover, how to truncate the iteration at appropriate steps is also challenging. In this paper we use the trust region method to deal with the image restoration problem; the trust region subproblem is solved by the truncated Lanczos method and the preconditioned truncated Lanczos method. We also develop a fast algorithm for evaluating the Kronecker matrix-vector product when the matrix is banded. The trust region method is very stable and robust, and it has the nice property of updating the trust region automatically. This releases us from tedious fi
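The Kronecker matrix-vector product mentioned here can be evaluated without forming the Kronecker product explicitly, via the identity (A ⊗ B) vec(X) = vec(B X A^T) for column-major vec. The Python sketch below demonstrates the identity on small dense matrices; the paper's banded-matrix specialization is not reproduced, and the test data are illustrative assumptions.

import numpy as np

def kron_matvec(A, B, x):
    # x is vec(X) in column-major order, with X of shape (B.shape[1], A.shape[1]).
    X = x.reshape(B.shape[1], A.shape[1], order="F")
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
x = rng.standard_normal(20)
assert np.allclose(np.kron(A, B) @ x, kron_matvec(A, B, x))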

11.
We develop a regularization for binary collisions in some restricted 3-body problems moving in planar one-dimensional spaces with constant curvature. The main characteristic of the regularization is that it preserves the Hamiltonian structure of the equations and it regularizes all the binary collisions with just one transformation. We apply this global symplectic regularization to the 2-body problem on the unit circle and we show the global dynamics. Also, we tackle the restricted 3-body problem with one fixed center in the unit circle and we give the global dynamics for the case when it has two fixed centers.

12.
陈仲英  宋丽红 《东北数学》2005,21(2):131-134
Many industrial and engineering applications require numerically solving ill-posed problems. Regularization methods are employed to find approximate solutions of these problems. The choice of regularization parameters by numerical algorithms is one of the most important issues for the success of regularization methods. When we use some discrepancy principles to determine the regularization parameter,

13.
Learning gradients is one approach to variable selection and feature covariation estimation when dealing with large data sets of many variables or coordinates. In a classification setting involving a convex loss function, a possible algorithm for gradient learning is implemented by solving convex quadratic programming optimization problems induced by regularization schemes in reproducing kernel Hilbert spaces. The complexity of such an algorithm may be very high when the number of variables or samples is large. We introduce a gradient descent algorithm for gradient learning in classification. The implementation of this algorithm is simple, and its convergence is elegantly studied. Explicit learning rates are presented in terms of the regularization parameter and the step size. A deep analysis of approximation by reproducing kernel Hilbert spaces, under some mild conditions on the probability measure for sampling, allows us to deal with a general class of convex loss functions.

14.
This paper is devoted to solving a backward problem for a time-fractional diffusion equation with variable coefficients in a general bounded domain by the Tikhonov regularization method. Based on the eigenfunction expansion of the solution, the backward problem of recovering the initial data is reduced to solving a Fredholm integral equation of the first kind. The conditional stability of the backward problem is obtained. We use the Tikhonov regularization method to deal with the integral equation and obtain a series expression for the solution. Furthermore, convergence rates for the Tikhonov regularized solution are proved under both an a priori and an a posteriori regularization parameter choice rule. Two numerical examples, in the one-dimensional and two-dimensional cases respectively, are investigated. Numerical results show that the proposed method is effective and stable.

15.
In the study of the choice of the regularization parameter for Tikhonov regularization of nonlinear ill-posed problems, Scherzer, Engl and Kunisch proposed an a posteriori strategy in 1993. To prove the optimality of the strategy, they imposed many very restrictive conditions on the problem under consideration, and their results are difficult to apply to concrete problems since one cannot verify whether the assumptions hold. In this paper we study this strategy further and show that Tikhonov regularization is order optimal, with the regularization parameter chosen according to this strategy, under some simple and easily checked assumptions. This paper weakens the conditions needed in existing results and provides theoretical guidance for numerical experiments. Received August 8, 1997 / Revised version received January 26, 1998

16.
In this paper, we consider a backward problem for a time-fractional diffusion equation with variable coefficients in a general bounded domain, that is, determining the initial data from noisy final data. We propose a quasi-boundary value regularization method combined with an a posteriori regularization parameter choice rule to deal with the backward problem, and give the corresponding convergence estimate.
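As background on the quasi-boundary value idea (written in generic notation as an assumption, not quoted from the paper): instead of imposing the noisy final condition $u(\cdot,T) = g^\delta$ directly, one solves the well-posed perturbed problem whose final condition is

$$ u(\cdot,T) + \alpha\, u(\cdot,0) = g^\delta, $$

where $\alpha > 0$ is the regularization parameter; an a posteriori rule then selects $\alpha$ from the noise level $\delta$.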

17.
Tikhonov regularization is one of the most popular methods for solving linear systems of equations or linear least-squares problems with a severely ill-conditioned matrix A. This method replaces the given problem by a penalized least-squares problem. The present paper discusses measuring the residual error (discrepancy) in Tikhonov regularization with a seminorm that uses a fractional power of the Moore-Penrose pseudoinverse of AA^T as weighting matrix. Properties of this regularization method are discussed. Numerical examples illustrate that the proposed scheme, for a suitable fractional power, may give approximate solutions of higher quality than standard Tikhonov regularization.
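One common way to realize such a fractionally weighted scheme is through the SVD A = U S V^T: with the residual measured in the seminorm induced by W = (A A^T)^((alpha-1)/2), the regularized solution has filter factors s^alpha / (s^(alpha+1) + mu), and alpha = 1 recovers standard Tikhonov. The Python sketch below is a hedged illustration of this formulation rather than the paper's exact method, and the default alpha is an arbitrary assumption.

import numpy as np

def fractional_tikhonov(A, b, mu, alpha=0.5):
    # SVD-based solution with fractional filter factors; alpha=1 gives standard Tikhonov.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = s**alpha / (s**(alpha + 1) + mu)
    return Vt.T @ (coeffs * (U.T @ b))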

18.
This paper studies a heat source identification problem. A modified Tikhonov method is introduced to handle the ill-posedness of the problem, and error estimates are obtained under an a priori and an a posteriori parameter choice rule, respectively. Numerical examples further verify the effectiveness and stability of the method.

19.
S. Dempe  P. Mehlitz 《Optimization》2018,67(6):737-756
In this article, we consider bilevel optimization problems with discrete lower-level and continuous upper-level problems. Taking into account both approaches (optimistic and pessimistic) that have been developed in the literature to deal with this type of problem, we derive conditions for the existence of solutions. In the case where the lower level is a parametric linear problem, the bilevel problem is transformed into a continuous one. After that, we are able to discuss local optimality conditions, using tools of variational analysis, for each of the different approaches. Finally, we consider a simple application of our results, namely the bilevel programming problem with the minimum spanning tree problem in the lower level.

20.
We deal with a generalization of the proximal-point method and the closely related Tikhonov regularization method for convex optimization problems. The prime motivation is the well-known connection between the classical proximal-point and augmented Lagrangian methods, and the emergence of modified augmented Lagrangian methods in recent years. Our discussion includes a formal proof of a corresponding connection between the generalized proximal-point method and the modified augmented Lagrangian approach in infinite dimensions. Several examples and counterexamples illustrate the convergence properties of the generalized proximal-point method and indicate that the corresponding assumptions are sharp.
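The classical connection referred to here can be written, for an equality-constrained convex problem $\min_x f(x)$ subject to $g(x) = 0$ and under standard convexity assumptions (background notation, not the generalized setting of the paper), as follows: one proximal-point step on the dual function $d(\lambda)$,

$$ \lambda^{k+1} = \arg\max_{\lambda} \Big\{ d(\lambda) - \tfrac{1}{2 c_k}\,\|\lambda - \lambda^k\|^2 \Big\}, $$

is equivalent to one augmented Lagrangian step: minimize $L_{c_k}(x,\lambda^k) = f(x) + (\lambda^k)^{\top} g(x) + \tfrac{c_k}{2}\,\|g(x)\|^2$ over $x$ to obtain $x^{k+1}$, then update $\lambda^{k+1} = \lambda^k + c_k\, g(x^{k+1})$.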
