Similar Literature
19 similar records found.
1.
A Modified HS Conjugate Gradient Method and Its Convergence
It is well-known that the direction generated by the Hestenes-Stiefel (HS) conjugate gradient method may fail to be a descent direction for the objective function. In this paper, we make a small modification to the HS method so that the generated direction always satisfies the sufficient descent condition. An advantage of the modified Hestenes-Stiefel (MHS) method is that the scalar β_k^HS remains nonnegative under the weak Wolfe-Powell line search. The global convergence of the MHS method is established under mild conditions. Preliminary numerical results show that the MHS method is slightly more efficient than the PRP and HS methods.
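As an illustration of the general device (not necessarily the paper's exact formula), the sketch below computes an HS-type direction with the scalar truncated at zero and a steepest-descent fallback that enforces sufficient descent:

```python
import numpy as np

def mhs_direction(g_new, g_old, d_old):
    """HS-type conjugate gradient direction with beta truncated at zero
    (a generic sketch; the paper's MHS formula may differ in detail)."""
    y = g_new - g_old                        # gradient difference
    denom = d_old @ y
    if abs(denom) < 1e-12:                   # unstable denominator: restart
        return -g_new
    beta = max(0.0, (g_new @ y) / denom)     # nonnegative HS scalar
    d = -g_new + beta * d_old
    if g_new @ d > -1e-4 * (g_new @ g_new):  # sufficient descent violated?
        d = -g_new                           # fall back to steepest descent
    return d
```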

2.
In this paper, a modified version of the Powell-Zangwill method for function minimization without calculating derivatives is proposed. The new method possesses the following properties: quadratic termination, global convergence for strictly convex functions, and a Q-linear convergence rate for uniformly convex functions. Furthermore, the main part of this paper shows that the rate of convergence of the new method is quadratic for every n(2n+1) line searches if the objective function is uniformly convex and suitably smooth on R^n.
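For context, Powell-style derivative-free minimization proceeds by successive line searches along a maintained set of directions; SciPy ships a generic implementation, shown here purely as a usage sketch (this is the standard Powell method, not the modified one of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Powell's method performs successive line searches along a direction
# set and never evaluates derivatives of the objective.
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method='Powell')
print(res.x, res.nfev)
```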

3.
In this paper, we propose a globally convergent Polak–Ribière–Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions, employing an Armijo-type line search that is simpler and less demanding than those defined in [4,10]. An attractive property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of the quadratic model Φ(t) := f(x_k) + t g_k^T d_k + (1/2) t² d_k^T Q_k d_k, where Q_k is a positive definite matrix that carries some second-order information about the objective function f. Consequently, the stepsize t_k is more likely to be accepted by the line search. Preliminary numerical results show that this method is efficient.
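Minimizing the quadratic model above in t gives a closed-form initial stepsize; a minimal sketch, with a scaled-identity Q_k standing in for the second-order information (an assumption for illustration):

```python
import numpy as np

def initial_stepsize(g, d, Q):
    """Minimizer of phi(t) = f + t*(g^T d) + 0.5*t^2*(d^T Q d):
    setting phi'(t) = 0 gives t = -(g^T d) / (d^T Q d)."""
    curvature = d @ Q @ d
    if curvature <= 0:           # Q not positive definite along d: fall back
        return 1.0
    return -(g @ d) / curvature

g = np.array([1.0, -2.0]); d = -g
print(initial_stepsize(g, d, Q=2.0 * np.eye(2)))   # 0.5
```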

4.
A new adaptive subspace minimization three-term conjugate gradient algorithm with a nonmonotone line search is introduced and analyzed in this paper. The search directions are computed by minimizing a quadratic approximation of the objective function over special subspaces, and we also propose an adaptive rule for choosing among the search directions at each iteration. We show that every choice of the search direction satisfies the sufficient descent condition. With the nonmonotone line search employed, we prove that the new algorithm is globally convergent for general nonlinear functions under mild assumptions. Numerical experiments show that the proposed algorithm is promising on the given test set.
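As a rough illustration of subspace minimization (not the paper's particular subspaces or model), a quadratic model can be minimized over the two-dimensional subspace spanned by −g_k and the previous step s_{k−1}, which reduces to a 2×2 linear system:

```python
import numpy as np

def subspace_direction(g, s_prev, hess_vec):
    """Minimize m(d) = g^T d + 0.5*d^T B d over d in span{-g, s_prev}.
    hess_vec(v) should return (an approximation of) B @ v."""
    V = np.column_stack([-g, s_prev])            # subspace basis
    A = V.T @ np.column_stack([hess_vec(-g), hess_vec(s_prev)])
    b = -(V.T @ g)                               # reduced gradient
    try:
        coef = np.linalg.solve(A, b)             # 2x2 reduced problem
    except np.linalg.LinAlgError:
        return -g                                # degenerate: steepest descent
    return V @ coef

B = np.diag([1.0, 10.0])                         # toy quadratic for the demo
g = np.array([3.0, -1.0]); s_prev = np.array([0.1, 0.2])
print(subspace_direction(g, s_prev, lambda v: B @ v))
```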

5.
In this paper, we present a nonmonotone algorithm for solving nonsmooth composite optimization problems, whose objective function is the composition of a nonsmooth convex function and a differentiable function. The method generates search directions by successively solving quadratic programming subproblems, and uses a nonmonotone line search in place of the usual Armijo-type line search. Global convergence is proved under standard assumptions. Numerical results are given.

6.
In this paper, a modified formula for β_k^PRP is proposed for the conjugate gradient method for unconstrained optimization problems. The value of β_k^PRP remains nonnegative independently of the line search. Under mild conditions, the global convergence of the modified PRP method with the strong Wolfe-Powell line search is established. Preliminary numerical results show that the modified method is efficient.
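One standard device for keeping a PRP-type scalar nonnegative regardless of the line search is truncation at zero, as in the well-known PRP+ variant; the paper's modified formula may differ, so this is only a sketch:

```python
import numpy as np

def prp_plus_beta(g_new, g_old):
    """PRP+ scalar: beta = max(0, g_new^T (g_new - g_old) / ||g_old||^2);
    the truncation keeps beta nonnegative for any line search."""
    y = g_new - g_old
    return max(0.0, (g_new @ y) / (g_old @ g_old))

g_old = np.array([1.0, 1.0]); g_new = np.array([0.5, -0.2])
print(prp_plus_beta(g_new, g_old))
```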

7.
Conjugate gradient algorithms differ in the choice of the parameter that defines the search direction. In this note, conditions on this parameter are given that ensure the descent property of the search directions, and the global convergence of the resulting class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for continuously differentiable functions with a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with the new method, the PR method is modified so as to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.

8.
The main purpose of this paper is to provide a restarting direction that improves on the standard conjugate gradient method. If drastically non-quadratic behaviour of the objective function is observed in the neighbourhood of x_k, a restart should be performed. The scaled symmetric rank-one update with Davidon's optimal criterion is applied to generate the restarting direction. It is proved that the conjugate gradient method with this strategy retains quadratic termination. Numerical experiments show that it is successful.
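For reference, the plain symmetric rank-one (SR1) update with the customary denominator safeguard is sketched below; the paper's variant adds scaling and Davidon's criterion on top of this, which is not reproduced here:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one update:
    B+ = B + (y - B s)(y - B s)^T / ((y - B s)^T s).
    The update is skipped when the denominator is too small."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(v) * np.linalg.norm(s):
        return B                  # standard safeguard against instability
    return B + np.outer(v, v) / denom
```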

9.
In this paper, we propose a new nonmonotone trust region Barzilai-Borwein (BB for short) method for solving unconstrained optimization problems. The proposed method is a novel combination of a modified Metropolis criterion, the BB stepsize, and the trust region framework. It uses the reciprocal of the BB stepsize to approximate the Hessian matrix of the objective function in the trust region subproblems, and accepts some uphill solutions according to the modified Metropolis criterion based on the simulated annealing idea. Under suitable assumptions, the global convergence of the new method is established. Preliminary numerical results indicate that the new method is more efficient than the existing trust region BB method.
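A brief sketch of the two ingredients: the BB stepsize, whose reciprocal serves as a scalar Hessian approximation, and a Metropolis-style acceptance test (the safeguards and the temperature value below are illustrative placeholders, not the paper's choices):

```python
import numpy as np

def bb_stepsize(s, y, lo=1e-10, hi=1e10):
    """First Barzilai-Borwein stepsize alpha = s^T s / s^T y; its
    reciprocal s^T y / s^T s approximates the Hessian by a scalar."""
    sty = s @ y
    if sty <= 0:                     # nonconvex curvature: cap the step
        return hi
    return float(np.clip((s @ s) / sty, lo, hi))

def metropolis_accept(f_old, f_new, temperature, rng):
    """Accept a worse point with probability exp(-(f_new - f_old)/T)."""
    if f_new <= f_old:
        return True
    return rng.random() < np.exp(-(f_new - f_old) / temperature)

rng = np.random.default_rng(0)
print(metropolis_accept(1.0, 1.2, temperature=0.5, rng=rng))
```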

10.
In this paper, a modified limited-memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses the global convergence property without any convexity assumption on the objective function. Under suitable conditions, the global convergence of the proposed method is proved. Numerical results are reported which illustrate that the proposed method is efficient.
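For orientation, the core of any limited-memory BFGS method is the two-loop recursion, which applies the inverse-Hessian approximation implicitly from the m most recent pairs (s_i, y_i); a standard sketch (the paper's modification concerns how the pairs are formed, not this recursion):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H @ g, where H is the L-BFGS
    inverse-Hessian approximation built from the stored (s, y) pairs."""
    q = g.astype(float).copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    if s_list:                            # initial scaling H0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(stack):                   # oldest first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

g = np.array([1.0, -1.0])
print(lbfgs_direction(g, [np.array([0.1, 0.0])], [np.array([0.2, 0.0])]))
```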

11.
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. It is well-known that the direction generated by a conjugate gradient method may not be a descent direction of the objective function. In this paper, we make a small modification to the Fletcher–Reeves (FR) method such that the direction generated by the modified method is always a descent direction for the objective function. This property depends neither on the line search used nor on the convexity of the objective function. Moreover, the modified method reduces to the standard FR method if the line search is exact. Under mild conditions, we prove that the modified method with an Armijo-type line search is globally convergent even if the objective function is nonconvex. We also present numerical results showing the efficiency of the proposed method. Supported by the 973 Project (2004CB719402) and the National Natural Science Foundation of China (10471036).

12.
In this paper, by projecting the PRP (Polak–Ribière–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if an exact line search is used, the method reduces to the standard PRP method. Under suitable conditions, we show that the method with a backtracking line search or a generalized Wolfe-type line search is globally convergent. We also report numerical results and compare the performance of the method with some existing conjugate gradient methods; the results show that the proposed method is efficient.

13.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the line search framework. Our numerical results show that the proposed method is efficient on standard test problems, provided the parameter included in the method is chosen well.
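A minimal sketch of the memory gradient idea: reuse the previous direction with a weight bounded so that descent holds a priori. If |β_k| ≤ c·||g_k||/||d_{k−1}|| with 0 < c < 1, then g_k^T d_k ≤ −(1−c)||g_k||² by the Cauchy-Schwarz inequality; the bound is a generic device, not the paper's specific parameter rule:

```python
import numpy as np

def memory_gradient_direction(g, d_prev, c=0.5):
    """d = -g + beta * d_prev with |beta| <= c * ||g|| / ||d_prev||,
    which guarantees g^T d <= -(1 - c) * ||g||^2 (descent)."""
    nd = np.linalg.norm(d_prev)
    if nd == 0.0:
        return -g
    beta = c * np.linalg.norm(g) / nd     # generic choice at the bound
    return -g + beta * d_prev
```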

14.
This note describes a modified search strategy for use with a Gauss-Newton method for nonlinear least-squares problems. If a standard line search along the Gauss-Newton vector p is unable to make much progress, a new search direction is constructed in the plane spanned by p and the steepest-descent vector. Numerical experiments show that a quadratic model of the objective function in this plane can yield effective corrections when the basic Gauss-Newton technique runs into difficulty.
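For context, the Gauss-Newton vector is the solution of the linearized least-squares problem min_p ||J p + r||²; a compact sketch via a least-squares solve (the plane-search correction of the note is not reproduced here):

```python
import numpy as np

def gauss_newton_step(residual, jacobian):
    """Solve min_p ||J p + r||_2 for the Gauss-Newton direction p;
    lstsq also copes with rank-deficient Jacobians."""
    p, *_ = np.linalg.lstsq(jacobian, -residual, rcond=None)
    return p

# Tiny example: model m(x) = x, data b, residuals r_i(x) = x - b_i.
b = np.array([0.9, 1.1, 1.3])
J = np.ones((3, 1))                   # dr/dx
r = 0.0 - b                           # residuals at x = 0
print(gauss_newton_step(r, J))        # [1.1]: step to the mean of b
```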

15.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of the search direction, obtained by minimizing a quadratic approximation of the objective function over special subspaces, and state criteria for choosing between them. Under the given conditions, we show that each choice of the direction satisfies the sufficient descent property. Based on a measure of how close the objective function is to a quadratic, a new strategy for choosing the initial stepsize of the line search is presented. With the nonmonotone Wolfe line search used, we prove the global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.
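One textbook way to adapt the initial stepsize to first-order information is α₀ = 2(f_k − f_{k−1})/(g_k^T d_k), which is exact when f is quadratic along d_k; shown here only as a generic illustration, not the paper's strategy:

```python
def initial_alpha(f_curr, f_prev, slope, fallback=1.0):
    """alpha0 = 2 * (f_curr - f_prev) / slope with slope = g_k^T d_k < 0.
    Exact for a quadratic; fall back when the formula is meaningless."""
    if slope >= 0 or f_curr >= f_prev:
        return fallback
    return 2.0 * (f_curr - f_prev) / slope

print(initial_alpha(f_curr=1.0, f_prev=2.0, slope=-4.0))   # 0.5
```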

16.
Global Convergence of a Class of Nonmonotone Modified PRP Algorithms
Yi Fang (易芳), 《经济数学》, 2006, 23(1): 99-103
This paper presents a modified PRP algorithm under a class of nonmonotone line searches; the method guarantees that the search direction satisfies the sufficient descent condition at every iteration. Under rather weak conditions, we prove that this class of nonmonotone modified PRP algorithms is globally convergent.

17.
In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if an exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with an Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
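A three-term PRP-type direction achieves exactly this property: with β_k = g_k^T y_{k−1}/||g_{k−1}||² and θ_k = g_k^T d_{k−1}/||g_{k−1}||², the direction d_k = −g_k + β_k d_{k−1} − θ_k y_{k−1} satisfies g_k^T d_k = −||g_k||² identically, independent of the line search. Whether this matches the paper's formula exactly is an assumption; the sketch illustrates the construction:

```python
import numpy as np

def three_term_prp_direction(g, g_prev, d_prev):
    """Three-term PRP-type direction with g^T d = -||g||^2 by
    construction, hence always a descent direction."""
    y = g - g_prev
    n2 = g_prev @ g_prev
    beta = (g @ y) / n2
    theta = (g @ d_prev) / n2
    return -g + beta * d_prev - theta * y

g_prev = np.array([1.0, 2.0]); g = np.array([0.5, -1.0])
d = three_term_prp_direction(g, g_prev, -g_prev)
print(g @ d, -(g @ g))     # the two values coincide
```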

18.
The PSB (Powell-Symmetric-Broyden) algorithm is a very important algorithm and has been extensively used in trust region methods. However, there are few studies of line search PSB algorithms, primarily because the direction generated by this class of algorithms is not necessarily a descent direction of the objective function. In this paper, by combining a nonmonotone line search technique with the PSB method, we propose a nonmonotone PSB algorithm for solving unconstrained optimization problems. Under proper conditions, we establish global convergence and superlinear convergence of the proposed algorithm, and we verify its efficiency in numerical experiments.
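For reference, the PSB update itself, a symmetric rank-two update that satisfies the secant equation without requiring positive curvature; a short sketch:

```python
import numpy as np

def psb_update(B, s, y):
    """Powell-Symmetric-Broyden update with v = y - B s:
    B+ = B + (v s^T + s v^T)/(s^T s) - ((v^T s)/(s^T s)^2) s s^T.
    B+ is symmetric and satisfies the secant equation B+ s = y."""
    v = y - B @ s
    ss = s @ s
    return (B + (np.outer(v, s) + np.outer(s, v)) / ss
              - ((v @ s) / ss**2) * np.outer(s, s))

# Quick check of the secant equation:
B = np.eye(2); s = np.array([1.0, 2.0]); y = np.array([2.0, 1.0])
print(np.allclose(psb_update(B, s, y) @ s, y))   # True
```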

19.
On the Nonmonotone Line Search
The technique of nonmonotone line search has seen many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along several lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between the nonmonotone line search and R-linear convergence under the assumption that the objective function is uniformly convex. In addition, taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest a standard Armijo line search when the nonmonotone condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
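A compact sketch of the classical Grippo-Lampariello-Lucidi style nonmonotone Armijo test, which compares the trial value against the maximum of the last M objective values instead of only the current one (the fallback to a standard Armijo search discussed in the paper is not modeled here):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, slope, f_hist, delta=1e-4, tau=0.5, max_iter=50):
    """Backtracking with the nonmonotone Armijo test
    f(x + a*d) <= max(f_hist) + delta * a * slope, where slope = g^T d < 0
    and f_hist holds the last M objective values (the nonmonotone memory)."""
    f_ref = max(f_hist)
    a = 1.0
    for _ in range(max_iter):
        if f(x + a * d) <= f_ref + delta * a * slope:
            return a
        a *= tau                   # shrink the trial steplength
    return a

# Usage on f(x) = ||x||^2 from x = (1, 1) along the steepest descent direction:
f = lambda x: x @ x
x = np.array([1.0, 1.0]); g = 2 * x; d = -g
print(nonmonotone_armijo(f, x, d, slope=g @ d, f_hist=[f(x)]))   # 0.5
```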
