Similar Literature
20 similar documents found.
1.
Steepest Descent, CG, and Iterative Regularization of Ill-Posed Problems
The state-of-the-art iterative method for solving large linear systems is the conjugate gradient (CG) algorithm. Theoretical convergence analysis suggests that CG converges more rapidly than steepest descent. This paper argues that steepest descent may nevertheless be an attractive alternative to CG when solving linear systems arising from the discretization of ill-posed problems. Specifically, it is shown that, for ill-posed problems, steepest descent has more stable convergence behavior than CG, which may be explained by the fact that the filter factors for steepest descent behave much less erratically than those for CG. Moreover, it is shown that, with proper preconditioning, the convergence rate of steepest descent is competitive with that of CG. This revised version was published online in October 2005 with corrections to the cover date.
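For reference, a minimal sketch of the two iterations compared above, applied to a symmetric positive definite system Ax = b; this is an illustrative reconstruction, not code from the paper:

```python
import numpy as np

def steepest_descent(A, b, x0, iters):
    # Classical steepest descent with the exact (Cauchy) steplength.
    x = x0.copy()
    for _ in range(iters):
        r = b - A @ x                              # residual = negative gradient
        x = x + ((r @ r) / (r @ (A @ r))) * r
    return x

def conjugate_gradient(A, b, x0, iters):
    # Standard CG; for discretized ill-posed problems it converges faster,
    # but, per the paper, its filter factors behave more erratically.
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x
```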

2.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden-Fletcher-Goldfarb-Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
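A hedged sketch of the two steepest descent preconditioner variants described above (an illustrative reconstruction; the function names, the Armijo parameters, and the normalization in the fixed-step variant are assumptions, and the N-GMRES acceleration step itself is omitted):

```python
import numpy as np

def sd_precond_linesearch(f, grad, x, c=1e-4, alpha0=1.0):
    # Variant 1: steepest descent step with a backtracking (Armijo) line search.
    g = grad(x)
    alpha = alpha0
    while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
        alpha *= 0.5
    return x - alpha * g

def sd_precond_fixed(grad, x, eps=1e-4):
    # Variant 2: a predefined small step along the steepest descent direction.
    g = grad(x)
    return x - eps * g / max(np.linalg.norm(g), 1e-12)
```

Either function can serve as the preconditioner that generates the trial iterate which N-GMRES then accelerates over a window of previous iterates.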

3.
This paper studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be reformulated as an unconstrained optimization problem on the symmetric positive definite matrix group, an intrinsic steepest descent algorithm with adaptive variable step size is proposed. Then, using the Taylor expansion with integral remainder of a smooth function at an arbitrary point of a Lie group, the proposed algorithm is shown to converge linearly on the symmetric positive definite matrix group. Finally, numerical experiments on classification problems demonstrate the effectiveness of the algorithm.
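To make the setting concrete, here is a sketch of one standard intrinsic steepest descent step on the SPD manifold under the affine-invariant metric; the exponential-map update, the fixed steplength, and the toy objective are assumptions for illustration (the paper's algorithm uses an adaptive steplength):

```python
import numpy as np
from scipy.linalg import expm, sqrtm, inv

def spd_steepest_descent_step(X, G, tau):
    # Geodesic step X_+ = X^{1/2} expm(-tau X^{1/2} G X^{1/2}) X^{1/2},
    # where G is the symmetric Euclidean gradient of the objective at X.
    Xh = np.real(sqrtm(X))
    return Xh @ expm(-tau * (Xh @ G @ Xh)) @ Xh

# Toy objective f(X) = trace(C X) - log det(X), minimized at X* = C^{-1}.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
C = M @ M.T + 4 * np.eye(4)
X = np.eye(4)
for _ in range(300):
    G = C - inv(X)                       # Euclidean gradient of f at X
    X = spd_steepest_descent_step(X, G, tau=0.05)
print(np.linalg.norm(X - inv(C)))        # should be near zero
```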

4.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good bounds for use in a branch-and-bound method. Methods such as the subgradient method and bundle methods assume only one subgradient is available at each point, but in many situations more information is available. We present a new method for optimizing such functions, which is related to steepest descent but uses an outer approximation to the subdifferential to avoid some of the numerical problems of the steepest descent approach. We provide convergence results for a class of outer approximations, and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe. Received September 10, 1998 / Revised version received August 1999 / Published online December 15, 1999
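As an illustration of the direction-finding step such methods build on (a generic sketch, not the paper's specific outer approximation): the steepest descent direction of a piecewise affine function is the negative of the minimum-norm element of the subdifferential, which can be computed from a set of collected subgradients as follows.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_direction(G):
    # G: (m, n) array whose rows are subgradients collected at the current
    # point (an outer approximation of the subdifferential). Returns the
    # direction -argmin ||G^T lam||^2 over the unit simplex.
    m = G.shape[0]
    res = minimize(lambda lam: np.sum((G.T @ lam) ** 2),
                   np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq",
                                "fun": lambda lam: lam.sum() - 1.0},
                   method="SLSQP")
    return -(G.T @ res.x)
```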

5.
An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality-constrained region defined by the constraints which are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.
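For concreteness (a standard formulation, not quoted from the paper): if $A$ denotes the matrix whose rows are the gradients of the constraints required to remain binding, the continuous steepest descent curve on that equality-constrained region solves

$$\dot{x}(t) = -\left(I - A^{\top}(AA^{\top})^{-1}A\right)\nabla f\bigl(x(t)\bigr),$$

and the algorithm's curvilinear search arcs come from a linear approximation of this differential equation.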

6.
In this paper we present certain characteristic conditions for the convergence of the generalized steepest descent approximation process to a zero of a generalized strongly accretive operator defined on a uniformly smooth Banach space. Our study is based on an important result of Reich [S. Reich, An iterative procedure for constructing zeros of accretive sets in Banach spaces, Nonlinear Anal. 2 (1978) 85–92], and the given results extend and improve some earlier results on the steepest descent approximation method.
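In its generic form (the paper's generalized variant may differ in details), the steepest descent approximation process for a zero of an accretive operator $A$ reads

$$x_{n+1} = x_n - \alpha_n A x_n, \qquad n \ge 0,$$

for a suitable steplength sequence $\{\alpha_n\}$.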

7.
Molecular similarity index measures the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically, and apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and among biomolecules such as proteins. Our computational results show that our algorithm achieves more accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
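A toy sketch of this six-variable formulation (my own illustration, not the authors' code: unit-width Gaussians, a Carbo-style normalized overlap, and numerically differentiated BFGS are all assumptions; the paper computes the gradient analytically):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def overlap(A, B, sigma=1.0):
    # Sum of pairwise Gaussian-Gaussian overlaps (up to a constant factor).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4 * sigma**2)).sum()

def similarity(A, B):
    # Carbo-style normalized similarity index in [0, 1].
    return overlap(A, B) / np.sqrt(overlap(A, A) * overlap(B, B))

def neg_similarity(v, A, B):
    # v = (rx, ry, rz, tx, ty, tz): rotation vector and translation applied to B.
    R = Rotation.from_rotvec(v[:3]).as_matrix()
    return -similarity(A, B @ R.T + v[3:])

A = np.random.default_rng(0).normal(size=(20, 3))     # mock molecule
B = A @ Rotation.from_rotvec([0.3, -0.2, 0.1]).as_matrix().T + 1.0

res = minimize(neg_similarity, np.zeros(6), args=(A, B), method="BFGS")
print("optimal similarity:", -res.fun)
```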

8.
We prove some new results about the asymptotic behavior of the steepest descent algorithm for general quadratic functions. Some well-known results of...

9.
Two projected gradient algorithms for linear programming are discussed. The first uses a fairly conventional steepest-edge approach and implements a version of Wolfe's method for resolving any problems of degeneracy. The second makes use of a steepest descent direction which is calculated by solving a linear least squares problem subject to bound constraints, and avoids problems of degeneracy altogether. They are compared using randomly generated problems which permit the specification of (known) degenerate minima. The results appear to favour the more conventional method as problem sizes get larger.

10.
A New Stepsize for the Steepest Descent Method
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein; their method is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high-dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae that enable fast convergence and possess the monotone property. Such a stepsize α_k for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
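Since the paper's new stepsize formula is not reproduced in the abstract, here is a hedged sketch of the baseline it builds on: steepest descent for a convex quadratic with the classical BB1 stepsize, seeded by one exact (Cauchy) line search step:

```python
import numpy as np

def bb_steepest_descent(A, b, x0, iters=50):
    # Minimize f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite.
    x = x0.copy()
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))      # exact line search for the first step
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1 stepsize for the next iteration
        x, g = x_new, g_new
    return x
```

The algorithm described above instead alternates a monotonicity-preserving stepsize of this flavor in even iterations with exact line searches in odd iterations.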

11.
In this paper, we present a new algorithm for computing local extrema by modifying and combining algorithms from symbolic and numerical computation. This new algorithm improves the classical steepest descent method, which may not terminate, by combining a separation method based on Sturm's theorem with a sufficient condition for infeasibility. In addition, we incorporate a grid subdivision method into our algorithm to approximate all local extrema. The complexity of our algorithm is polynomial in a newly defined condition number, and singly exponential in the number of variables.

12.
A recent work of Shi (Numer. Linear Algebra Appl. 2002; 9:195–203) proposed a hybrid algorithm which combines a primal-dual potential reduction algorithm with the use of the steepest descent direction of the potential function. The complexity of the potential reduction algorithm remains valid, but the overall computational cost can be reduced. In this paper, we make efforts to reduce the computational costs further. We notice that, in order to obtain the steepest descent direction of the potential function, the Hessian matrix of second-order partial derivatives of the objective function needs to be computed. To avoid this, we propose another hybrid algorithm which uses a projected steepest descent direction of the objective function instead of the steepest descent direction of the potential function. The complexity of the original potential reduction algorithm still remains valid, but the overall computational cost is further reduced. Our numerical experiments are also reported. Copyright © 2004 John Wiley & Sons, Ltd.

13.
Z. Akbari, Optimization, 2017, 66(9): 1519-1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic model of the objective function is constructed. The null space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the resulting quadratic model with only the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and the numerical results are reported.
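For reference, a sketch of the Steihaug-Toint truncated CG solver for the trust region subproblem min g^T p + 0.5 p^T H p subject to ||p|| <= delta (a standard formulation; parameter names are mine, and the paper's null space handling of the linear constraints is not shown):

```python
import numpy as np

def boundary_tau(p, d, delta):
    # Positive root of ||p + tau d||^2 = delta^2.
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def cg_steihaug(H, g, delta, tol=1e-8):
    p = np.zeros_like(g)
    r = g.copy()                         # residual of H p + g at p = 0
    d = -r
    for _ in range(2 * g.size):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:                     # negative curvature: hit the boundary
            return p + boundary_tau(p, d, delta) * d
        alpha = (r @ r) / dHd
        p_new = p + alpha * d
        if np.linalg.norm(p_new) >= delta:
            return p + boundary_tau(p, d, delta) * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return p_new
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        p, r = p_new, r_new
    return p
```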

14.
A new algorithm for image segmentation based on the Barzilai-Borwein (BB) method is presented. Its advantage is that it uses information from both the current and the previous iterate to determine the search steplength, and thus searches for the optimal solution more effectively. First, the Chan-Vese (CV) model is converted into an optimization problem via the variational level set method; second, the BB algorithm is introduced to solve this optimization problem; then, a convergence analysis of the BB algorithm is given, providing a theoretical basis for its application to the CV model; finally, the segmentation results are compared with those of the existing steepest descent and conjugate gradient methods. The results show that, compared with the other two methods, the BB algorithm improves speed and performance while maintaining good segmentation quality.

15.
The purpose of this paper is to introduce an iterative algorithm, a combination of the hybrid viscosity approximation method and the hybrid steepest-descent method, for solving proximal split feasibility problems, and to obtain strong convergence of the sequences generated by the iterative scheme under certain weaker conditions in Hilbert spaces. Our results improve many recent results on this topic in the literature. Several numerical experiments are presented to illustrate the effectiveness of the proposed algorithm; these show that our method is computationally simpler and faster than previously known methods for the proximal split feasibility problem.

16.
Methods related to Wolfe's recursive method for the resolution of degeneracy in linear programming are discussed, and a nonrecursive variant which works with probability one is suggested. Numerical results for both nondegenerate problems and problems constructed to have degenerate optima are reported. These are obtained using a careful implementation of the projected gradient algorithm [11], and are compared with results obtained using a steepest descent approach which can be implemented by means of a closely related projected gradient method and which is, in principle, not affected by degeneracy. However, the problem of correctly identifying degenerate active sets occurs with both algorithms. The numerical results favour the more standard projected gradient procedure, which resolves the degeneracy explicitly. An extension of both methods to general polyhedral convex function minimization problems is sketched.

17.
We propose a variant of the numerical method of steepest descent for oscillatory integrals by using a low-cost explicit polynomial approximation of the paths of steepest descent. A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than a truncated asymptotic expansion at similar computational effort. Theoretical results based on number theory underpinning the mechanisms behind this effect are presented.
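The underlying idea (standard in this literature, stated here for context; the paper's contribution is the low-cost polynomial approximation of the paths $h_a, h_b$): assuming $f$ and $g$ are analytic and $g'$ does not vanish on $[a,b]$, the oscillatory integral decomposes into two non-oscillatory integrals along paths of steepest descent $h_x(p)$ defined by $g(h_x(p)) = g(x) + ip$:

$$\int_a^b f(x)\,e^{i\omega g(x)}\,dx = e^{i\omega g(a)}\!\int_0^{\infty}\! f(h_a(p))\,h_a'(p)\,e^{-\omega p}\,dp \;-\; e^{i\omega g(b)}\!\int_0^{\infty}\! f(h_b(p))\,h_b'(p)\,e^{-\omega p}\,dp.$$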

18.
Relaxed Steepest Descent and Cauchy-Barzilai-Borwein Method
The negative gradient direction for finding local minimizers has been associated with the classical steepest descent method, which behaves poorly except for very well conditioned problems. We stress that the poor behavior of steepest descent methods is due to the optimal Cauchy choice of steplength and not to the choice of the search direction. We discuss over- and under-relaxation of the optimal steplength. In fact, we study and extend recent nonmonotone choices of steplength that significantly enhance the behavior of the method. For a new particular case (the Cauchy-Barzilai-Borwein method), we present a convergence analysis and encouraging numerical results that illustrate the advantages of using nonmonotone overrelaxations of the gradient method.
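A minimal sketch of the relaxation idea for a convex quadratic (an illustrative reconstruction; theta = 1 recovers classical steepest descent, and the specific Cauchy-Barzilai-Borwein scheme is not reproduced here):

```python
import numpy as np

def relaxed_steepest_descent(A, b, x0, theta=1.3, iters=200):
    # Minimize f(x) = 0.5 x^T A x - b^T x using the Cauchy steplength
    # scaled by a relaxation factor theta (theta > 1: overrelaxation,
    # theta < 1: underrelaxation).
    x = x0.copy()
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-10:
            break
        cauchy = (g @ g) / (g @ (A @ g))
        x = x - theta * cauchy * g
    return x
```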

19.
We propose a new monotone algorithm for unconstrained optimization in the framework of the Barzilai and Borwein (BB) method and analyze the convergence properties of this new descent method. Motivated by the fact that the BB method does not guarantee descent in the objective function at each iteration, yet performs better than the steepest descent method, we attempt to find a stepsize formula which approximates the Hessian based on the quasi-Cauchy equation and possesses the monotone property in each iteration. Practical insights into the effectiveness of the proposed techniques are given by a numerical comparison with the BB method.

20.
Iterative Solutions of Strongly Accretive Operator Equations in Normed Linear Spaces
Without the assumption that {‖(I - A)x_n‖} is bounded, the iterative approximation by the steepest descent method of zeros of strongly accretive mappings is studied in real normed linear spaces, thereby improving and extending recent related results.

