Similar Literature
20 similar documents found.
1.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
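For a concrete picture of the windowed acceleration this abstract describes, here is a minimal Python sketch of steepest-descent-preconditioned N-GMRES, with the least-squares recombination written in an Anderson-acceleration-like form; the function name, window size, fixed preconditioning step beta, and the simple accept/reject test are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def ngmres_sd(f, grad_f, x0, window=5, beta=1e-4, max_iter=500, tol=1e-8):
        # N-GMRES-style acceleration wrapped around a steepest descent
        # preconditioner (the abstract's second variant: a predefined
        # small step along -grad_f, no line search).
        x = np.asarray(x0, dtype=float)
        X_hist, R_hist = [], []                  # window of iterates/gradients
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            x_pre = x - beta * g                 # preconditioning step
            r_pre = grad_f(x_pre)
            if X_hist:
                dX = np.column_stack([xi - x_pre for xi in X_hist])
                dR = np.column_stack([ri - r_pre for ri in R_hist])
                alpha, *_ = np.linalg.lstsq(dR, -r_pre, rcond=None)
                x_acc = x_pre + dX @ alpha       # least-squares recombination
                x = x_acc if f(x_acc) < f(x_pre) else x_pre
            else:
                x = x_pre
            X_hist.append(x.copy())
            R_hist.append(grad_f(x))
            X_hist, R_hist = X_hist[-window:], R_hist[-window:]
        return x

As a smoke test, ngmres_sd(lambda x: float(x @ x), lambda x: 2 * x, np.ones(10)) should converge quickly, since the window recombination is exact for quadratics.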

2.
In this paper, we present a new gradient method for linear and nonlinear ill-posed problems F(x) = y. Combined with the discrepancy principle as stopping rule it is a regularization method that yields convergence to an exact solution if the operator F satisfies a tangential cone condition. If the exact solution satisfies smoothness conditions, then even convergence rates can be proven. Numerical results show that the new method in most cases needs fewer iteration steps than Landweber iteration, the steepest descent or minimal error method.
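As a concrete illustration of the discrepancy-principle stopping rule mentioned here, a sketch using plain Landweber iteration on a linear problem as a stand-in (the paper's new gradient method itself is not reproduced); tau and the step choice are illustrative.

    import numpy as np

    def gradient_discrepancy(A, y_delta, delta, tau=1.1, max_iter=10000):
        # Landweber-type gradient iteration for A x = y with noisy data
        # y_delta, ||y - y_delta|| <= delta.  Stops by the discrepancy
        # principle: first k with ||A x_k - y_delta|| <= tau * delta.
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step ensuring convergence
        x = np.zeros(A.shape[1])
        for _ in range(max_iter):
            r = A @ x - y_delta
            if np.linalg.norm(r) <= tau * delta:  # discrepancy principle
                break
            x = x - omega * (A.T @ r)             # gradient step
        return x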

3.
Summary For the solution of linear ill-posed problems some gradient methods like conjugate gradients and steepest descent have been examined previously in the literature. It is shown that even though these methods converge in the case of exact data, their instability makes it impossible to base regularization methods with a-priori parameter choice upon them.

4.
This paper describes some sufficient conditions for global convergence in five differential equation algorithms for solving systems of non-linear algebraic equations involving positive variables. The algorithms are continuous time versions of a modified steepest descent method, Newton's method, a modified Newton's method and two algorithms using the transpose of the Jacobian in place of the inverse of the Jacobian in Newton's method. It is shown that under a set of mildly restrictive conditions the Jacobian transpose algorithm has qualitatively the same convergence as Newton's method.
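A small sketch of two of the five continuous-time algorithms (the Newton flow and the Jacobian-transpose flow), assuming a made-up smooth system F with everywhere-invertible Jacobian rather than one of the paper's examples.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Root finding for F(x) = 0 by integrating a differential equation:
    #   Newton flow:             dx/dt = -J(x)^{-1} F(x)
    #   Jacobian-transpose flow: dx/dt = -J(x)^T F(x)
    F = lambda x: np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                            x[1] + 0.5 * np.cos(x[0]) - 1.0])
    J = lambda x: np.array([[1.0, 0.5 * np.cos(x[1])],
                            [-0.5 * np.sin(x[0]), 1.0]])

    newton_flow = lambda t, x: -np.linalg.solve(J(x), F(x))
    transpose_flow = lambda t, x: -J(x).T @ F(x)

    for flow in (newton_flow, transpose_flow):
        sol = solve_ivp(flow, (0.0, 50.0), np.zeros(2), rtol=1e-10)
        print(sol.y[:, -1], F(sol.y[:, -1]))   # F at the endpoint is ~0

Note that the transpose flow is the gradient flow of 0.5*||F(x)||^2, which is why invertibility of J rules out spurious stationary points.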

5.
《Optimization》2012,61(9):1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, which is modelled with a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of the equilibrium points of the proposed differential inclusion is equal to that of the optimal solutions of the considered optimization problem. Moreover, it is shown that the trajectory of the solution converges to an element of the optimal solution set and the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is also designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
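To make such dynamics tangible, here is a crude Euler discretization of a steepest-descent-type differential inclusion for a tiny nonsmooth problem; the objective, the single linear inequality constraint, the exact-penalty treatment, and all step/penalty parameters are illustrative assumptions, not the article's model.

    import numpy as np

    # min f(x) = ||x - a||_1  subject to  x[0] + x[1] <= 0,
    # with the constraint folded in via an exact penalty.
    a = np.array([2.0, -1.0])
    mu, h = 5.0, 1e-3                       # penalty weight, Euler step

    def subgrad(x):
        s = np.sign(x - a)                  # a subgradient of ||x - a||_1
        if x[0] + x[1] > 0.0:               # constraint violated: add penalty
            s = s + mu * np.array([1.0, 1.0])
        return s

    x = np.zeros(2)
    for _ in range(20000):
        x = x - h * subgrad(x)              # Euler step along the inclusion
    print(x, x[0] + x[1])                   # feasible, near an optimal point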

6.
The Newton-Regularization Method for Solving a Class of Inverse Problems for Difference Equations
宋华  刘家琦 《计算数学》1990,12(3):225-231
When Newton's iteration is used to solve a nonlinear operator equation, the derivative of the nonlinear operator is always required to be boundedly invertible, i.e. the linearized equation must be well-posed. In practical numerical computation, however, numerical instability can occur even when this condition is satisfied. To overcome this difficulty, [1] combined Newton's method with the BG method (the averaging kernel method) for solving linear ill-posed problems, using the BG method to obtain a stable solution at each iteration step. Considering Tikhonov's regularization method…
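Since the abstract is cut off, only the general shape of such a Newton-regularization iteration can be sketched; below, each possibly ill-posed linearized step is stabilized with Tikhonov regularization as a stand-in for the paper's BG-method inner solver, with alpha and the iteration count as illustrative choices.

    import numpy as np

    def newton_tikhonov(F, Jac, y, x0, alpha=1e-3, n_iter=20):
        # Newton-type iteration in which each linearized equation
        # J(x_k) h = y - F(x_k) is solved stably via Tikhonov
        # regularization instead of a direct (unstable) solve.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            J = Jac(x)
            rhs = y - F(x)
            h = np.linalg.solve(J.T @ J + alpha * np.eye(x.size), J.T @ rhs)
            x = x + h
        return x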

7.
Recently, it has been observed that several nondifferentiable minimization problems share the property that the question of whether a given point is optimal can be answered by solving a certain bounded least squares problem. If the resulting residual vector, r, vanishes then the current point is optimal. Otherwise, r is a descent direction. In fact, as we shall see, r points at the steepest descent direction. On the other hand, it is customary to characterize the optimality conditions (and the steepest descent vector) of a convex nondifferentiable function via its subdifferential. Also, it is well known that optimality conditions are usually related to theorems of the alternative. One aim of our survey is to clarify the relations between these subjects. Another aim is to introduce a new type of theorems of the alternative. The new theorems characterize the optimality conditions of discrete ℓ1 approximation problems and multifacility location problems, and provide a simple way to obtain the subdifferential and the steepest descent direction in such problems. A further objective of our review is to demonstrate that the ability to compute the steepest descent direction at degenerate dead points opens a new way for handling degeneracy in active set methods.

8.
The Conjugate Gradient (CG) method and the Conjugate Residual (CR) method are Krylov subspace methods for solving symmetric (positive definite) linear systems. To solve nonsymmetric linear systems, the Bi-Conjugate Gradient (Bi-CG) method has been proposed as an extension of CG. Bi-CG has attractive short-term recurrences, and it is the basis for successful variants such as Bi-CGSTAB. In this paper, we extend CR to nonsymmetric linear systems with the aim of finding an alternative basic solver. Numerical experiments show that the resulting algorithm with short-term recurrences often gives smoother convergence behavior than Bi-CG. Hence, it may take the place of Bi-CG as the basis for such variants.
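For reference, a textbook-style sketch of the classical CR recurrences for symmetric systems that the paper extends (the nonsymmetric extension itself is not reproduced here).

    import numpy as np

    def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
        # Classical CR for symmetric A: minimizes the residual norm over
        # the Krylov subspace using short recurrences.
        x = np.zeros(b.size) if x0 is None else x0.copy()
        r = b - A @ x
        p = r.copy()
        Ar = A @ r
        Ap = Ar.copy()
        rAr = r @ Ar
        for _ in range(max_iter):
            alpha = rAr / (Ap @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            Ar = A @ r
            rAr_new = r @ Ar
            beta = rAr_new / rAr              # short-term recurrence
            p = r + beta * p
            Ap = Ar + beta * Ap
            rAr = rAr_new
        return x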

9.
Tom Lahmer 《PAMM》2007,7(1):2040015-2040016
An efficient solution of the inverse problem of identifying nonlinear dependencies in hyperbolic systems of PDEs, here piezoelectric material parameter curves, is the aim of this work. The dominant material tensor entries in the coupled field equations which describe the electromechanical interplay are approximated by functions depending on the physical field quantities electric field or mechanical stress. In order to solve this nonlinear and ill-posed problem of parameter curve identification efficiently, modified Landweber iterations (steepest descent and minimal error) will be studied. A multilevel approach is expedient due to the discretization of the unknown parameter curves and the high computational effort of solving the forward problem (transient, nonlinear FEM computations). Theoretical investigations concerning convergence and regularization properties of the methods in a multilevel scenario will be presented, along with numerical results from an example in piezoelectricity. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
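The two modified Landweber stepsizes named here are easy to state; a sketch for a linear model problem follows (the paper's nonlinear, multilevel FEM setting is far beyond this snippet).

    import numpy as np

    def modified_landweber(A, y, n_iter=100, rule="minimal_error"):
        # Landweber direction s = A^T (y - A x) with adaptive stepsize:
        #   steepest descent: omega = ||s||^2 / ||A s||^2
        #   minimal error:    omega = ||r||^2 / ||s||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            r = y - A @ x
            s = A.T @ r
            if np.linalg.norm(s) < 1e-14:
                break
            if rule == "steepest_descent":
                As = A @ s
                omega = (s @ s) / (As @ As)
            else:                                # minimal error
                omega = (r @ r) / (s @ s)
            x = x + omega * s
        return x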

10.
This paper is concerned with the development of a parameter-free method, closely related to penalty function and multiplier methods, for solving constrained minimization problems. The method is developed via the quadratic programming model with equality constraints. The study starts with an investigation into the convergence properties of a so-called "primal-dual differential trajectory", defined by the direction of steepest descent with respect to the variables x of the problem and the direction of steepest ascent with respect to the Lagrange multipliers λ associated with the Lagrangian function. It is shown that the trajectory converges to a stationary point (x*, λ*) corresponding to the solution of the equality constrained problem. Subsequently numerical procedures are proposed by means of which practical trajectories may be computed, and the convergence of these trajectories is analyzed. A computational algorithm is presented and its application is illustrated by means of simple but representative examples. The extension of the method to inequality constrained problems is discussed and a non-rigorous argument, based on the Kuhn-Tucker necessary conditions for a constrained minimum, is put forward on which a practical procedure for determining the solution is based. The application of the method to inequality constrained problems is illustrated by its application to a couple of simple problems.
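A small sketch of such a primal-dual differential trajectory for an equality-constrained quadratic program, integrated numerically; the problem data are an illustrative example, not taken from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # min 0.5 x^T Q x - c^T x  s.t.  A x = b
    # Descend in x, ascend in lambda on L(x, lam) = 0.5 x^T Q x - c^T x
    # + lam^T (A x - b).
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    def trajectory(t, z):
        x, lam = z[:2], z[2:]
        dx = -(Q @ x - c + A.T @ lam)      # steepest descent in x
        dlam = A @ x - b                   # steepest ascent in lambda
        return np.concatenate([dx, dlam])

    sol = solve_ivp(trajectory, (0.0, 50.0), np.zeros(3), rtol=1e-8)
    x_star, lam_star = sol.y[:2, -1], sol.y[2:, -1]
    print(x_star, lam_star)                # approaches the KKT point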

11.
Two projected gradient algorithms for linear programming are discussed. The first uses a conventional enough steepest edge approach, and implements a version of Wolfe's method for resolving any problems of degeneracy. The second makes use of a steepest descent direction which is calculated by solving a linear least squares problem subject to bound constraints, and avoids problems of degeneracy altogether. They are compared using randomly generated problems which permit the specification of (known) degenerate minima. The results appear to favour the more conventional method as problem sizes get larger.

12.
In this paper we develop two multilevel iteration methods for solving linear systems resulting from the Galerkin method and Tikhonov regularization for linear ill-posed problems. The two algorithms and their convergence analyses are presented in an abstract framework.

13.
Gradient-type iterative methods for solving Hermitian eigenvalue problems can be accelerated by using preconditioning and deflation techniques. A preconditioned steepest descent iteration with implicit deflation (PSD-id) is one such method. The convergence behavior of the PSD-id has recently been investigated based on the pioneering work of Samokish on the preconditioned steepest descent method (PSD). The resulting non-asymptotic estimates indicate a superlinear convergence of the PSD-id under strong assumptions on the initial guess. The present paper utilizes an alternative convergence analysis of the PSD by Neymeyr under much weaker assumptions. We embed Neymeyr's approach into the analysis of the PSD-id using a restricted formulation of the PSD-id. More importantly, we extend the new convergence analysis of the PSD-id to a practically preferred block version of the PSD-id, or BPSD-id, and show the cluster robustness of the BPSD-id. Numerical examples are provided to validate the theoretical estimates.
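A sketch of the underlying PSD iteration for the smallest eigenvalue of a symmetric matrix (the implicit deflation of PSD-id and the block BPSD-id variant are omitted; the preconditioner interface T_solve is an assumption).

    import numpy as np

    def psd_smallest_eig(A, T_solve, x0, n_iter=200):
        # Preconditioned steepest descent: minimize the Rayleigh quotient
        # over span{x, T r}, where r = A x - rho x is the eigenresidual.
        x = x0 / np.linalg.norm(x0)
        for _ in range(n_iter):
            rho = x @ (A @ x)
            r = A @ x - rho * x
            w = T_solve(r)                        # preconditioned residual
            V = np.linalg.qr(np.column_stack([x, w]))[0]
            H = V.T @ A @ V                       # 2x2 Rayleigh-Ritz problem
            vals, vecs = np.linalg.eigh(H)
            x = V @ vecs[:, 0]                    # Ritz vector, smallest value
            x /= np.linalg.norm(x)
        return x @ (A @ x), x

A typical choice is T_solve = lambda r: np.linalg.solve(M, r) for some symmetric positive definite M approximating A.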

14.
A modification of Karmarkar's algorithm for linear programming successively generates column-scaled equivalents of the original linear program, and in the scaled coordinates obtains improvements according to steepest descent directions. As an interior-feasible-preserving algorithm, termination requires a purification algorithm to obtain a dual basic optimal solution. Together the two algorithms comprise a 'hybrid' procedure for solving linear programs. In this paper we present some convergence results on the Phase II and Phase I portions of the scaling algorithm. We also present results of numerical experiments on examples of Klee-Minty type which show sensitivity to the starting interior-feasible point and steepest descent step size.
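An affine-scaling-style sketch of the column-scaled steepest descent step described above (the purification step to a dual basic optimal solution is omitted, and the step fraction gamma is an illustrative choice).

    import numpy as np

    def affine_scaling(A, b, c, x, gamma=0.9, n_iter=50):
        # min c^T x  s.t.  A x = b, x >= 0, starting from a strictly
        # positive interior-feasible x.  Rescale so the current point maps
        # to the all-ones vector, take a steepest descent step for the
        # scaled cost, and map back.
        x = x.copy()
        for _ in range(n_iter):
            X2 = np.diag(x ** 2)
            w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
            d = -X2 @ (c - A.T @ w)               # feasible descent direction
            neg = d < 0
            if not neg.any():
                break                             # optimal or unbounded ray
            t = gamma * np.min(x[neg] / -d[neg])  # stay strictly interior
            x = x + t * d
        return x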

15.
Abstract

We propose and analyze a family of successive projection methods whose step direction is the same as the Landweber method for solving nonlinear ill-posed problems that satisfy the Tangential Cone Condition (TCC). This family encompasses the Landweber method, the minimal error method, and the steepest descent method, thus providing a unified framework for the analysis of these methods. Moreover, we define new methods in this family, which are convergent for the constant of the TCC in a range twice as large as the one required for the Landweber and other gradient type methods. The TCC is widely used in the analysis of iterative methods for solving nonlinear ill-posed problems. The key idea in this work is to use the TCC in order to construct special convex sets possessing a separation property, and to successively project onto these sets. Numerical experiments are presented for a nonlinear two-dimensional elliptic parameter identification problem, validating the efficiency of our method.

16.
The paper presents convergence estimates for a class of iterative methods for solving partial generalized symmetric eigenvalue problems whereby a sequence of subspaces containing approximations to eigenvectors is generated by combining the Rayleigh-Ritz and the preconditioned steepest descent/ascent methods. The paper uses a novel approach of studying the convergence of groups of eigenvalues, rather than individual ones, to obtain new convergence estimates for this class of methods that are cluster robust, i.e. do not involve distances between computed eigenvalues.

17.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important stepsize was given by Barzilai and Borwein; it is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high dimensional problems. The BB method is not monotone, and thus it is not easy to generalize to general nonlinear functions unless certain non-monotone techniques are applied. Therefore, it is very desirable to find stepsize formulae which enable fast convergence and possess the monotone property. Such a stepsize α_k for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large scale problems and better for small scale problems.
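The paper's new monotone stepsize formula is not given in the abstract, so the sketch below shows only the Barzilai-Borwein baseline it is compared against; names and tolerances are illustrative.

    import numpy as np

    def bb_gradient(grad, x0, n_iter=100):
        # Gradient descent with the (non-monotone) Barzilai-Borwein
        # stepsize alpha_k = s^T s / s^T d, where s = x_k - x_{k-1}
        # and d = g_k - g_{k-1}.
        x_prev = np.asarray(x0, dtype=float)
        g_prev = grad(x_prev)
        x = x_prev - 1e-4 * g_prev          # small initial gradient step
        for _ in range(n_iter):
            g = grad(x)
            s, d = x - x_prev, g - g_prev
            if np.linalg.norm(g) < 1e-10 or abs(s @ d) < 1e-16:
                break
            alpha = (s @ s) / (s @ d)       # BB1 stepsize
            x_prev, g_prev = x, g
            x = x - alpha * g
        return x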

18.
MULTILEVEL ITERATION METHODS FOR SOLVING LINEAR ILL-POSED PROBLEMS
In this paper we develop multilevel iteration methods for solving linear systems resulting from the Galerkin method and Tikhonov regularization for ill-posed problems. The algorithm and its convergence analysis are presented in an abstract framework.

19.
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1 penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to ℓ1-constraints, using a gradient method, with projection on ℓ1-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, without and with acceleration.
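A sketch of the two ingredients named here, iterative soft-thresholding and projection onto an ℓ1-ball (written with the standard sort-based projection; the paper's accelerated variants are omitted).

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal map of tau * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def project_l1_ball(v, radius):
        # Euclidean projection onto {x : ||x||_1 <= radius}: soft-threshold
        # with the data-dependent threshold found by a sorted search.
        if np.abs(v).sum() <= radius:
            return v
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
        tau = (css[k] - radius) / (k + 1.0)
        return soft_threshold(v, tau)

    def projected_gradient_l1(A, y, radius, n_iter=200):
        # Gradient steps for 0.5*||A x - y||^2 with projection onto an
        # l1-ball, the abstract's alternative to penalized iterations.
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(n_iter):
            x = project_l1_ball(x - step * (A.T @ (A @ x - y)), radius)
        return x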

20.
The Differential Continuation-Regularization Method for Solving the Coefficient Inversion Problem of the One-Dimensional Acoustic Wave Equation
This paper combines the differential continuation method for solving systems of nonlinear equations with the Tikhonov regularization method for solving ill-posed problems, forming a "differential continuation-regularization method" that possesses both large-range convergence and stability. A convergence analysis of the method is given. A numerical example for an inverse problem of the one-dimensional wave equation, presented at the end of the paper, demonstrates the effectiveness of the method.
