Found 20 similar documents; search took 15 ms.
1.
Gonglin Yuan Xiwen Lu Zengxin Wei 《Journal of Computational and Applied Mathematics》2009,233(2):519-530
A modified conjugate gradient method is presented for solving unconstrained optimization problems. It possesses the following properties: (i) the sufficient descent property holds without any line search; (ii) the search direction automatically lies in a trust region; (iii) the Zoutendijk condition holds under the Wolfe–Powell line search; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method, namely the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a sequence of tiny steps. Global convergence and a linear convergence rate are established. Numerical results show that the method is promising.
2.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method with good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu–Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees global convergence of the LS method and performs well numerically. By estimating the Lipschitz constant of the derivative of the objective function within the new line search, we can find an adequate step size and substantially decrease the number of function evaluations per iteration. Numerical results show that the new approach is effective in practical computation.
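To illustrate the kind of acceptance rule involved, here is a minimal sketch of a nonmonotone Armijo backtracking search in the Grippo–Lampariello–Lucidi style, which compares the trial value against the maximum of recent function values rather than the current one. This is an illustrative sketch, not the specific rule proposed in the paper (which additionally exploits a Lipschitz-constant estimate); the function name and parameter defaults are assumptions for the example.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, rho=0.5, max_iter=50):
    """Backtracking nonmonotone Armijo search (GLL-style sketch): accept alpha
    when f(x + alpha*d) <= max(recent f values) + sigma*alpha*g.d."""
    f_ref = max(f_hist)           # reference value: max over a sliding window
    gd = float(g @ d)             # directional derivative; d must be a descent direction
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + sigma * alpha * gd:
            return alpha
        alpha *= rho              # backtrack
    return alpha

# Example: f(x) = ||x||^2 with the steepest descent direction.
f = lambda x: float(x @ x)
x = np.array([1.0, 1.0])
g = 2 * x
d = -g
alpha = nonmonotone_armijo(f, x, d, g, f_hist=[f(x)])
```

With a window of only the current value the rule reduces to the ordinary Armijo condition; a longer `f_hist` window is what permits occasional nonmonotone increases in f.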
3.
It is well known that global convergence has not been established for the Polak–Ribière–Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In convergence analyses of the PRP method with a Wolfe line search, the (sufficient) descent condition and the restriction βk ≥ 0 are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed: under suitable conditions, global convergence of the PRP method is established with a modified Wolfe line search. Some special choices of βk that ensure the descent property of the search direction are also discussed. Preliminary numerical results on a set of large-scale problems show that the PRP method's computational efficiency is encouraging.
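For reference, the PRP parameter and the nonnegativity restriction βk ≥ 0 discussed above (the "PRP+" safeguard) can be sketched as follows; the helper names are illustrative, not from the paper.

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Polak-Ribiere-Polyak parameter: beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2."""
    y = g_new - g_old
    return float(g_new @ y) / float(g_old @ g_old)

def prp_direction(g_new, g_old, d_old, nonneg=False):
    """CG direction d_k = -g_k + beta_k * d_{k-1};
    nonneg=True enforces the beta_k >= 0 restriction (PRP+)."""
    beta = beta_prp(g_new, g_old)
    if nonneg:
        beta = max(beta, 0.0)
    return -g_new + beta * d_old
```

Note the built-in restart property: when consecutive gradients are nearly equal (a tiny step), y ≈ 0, so βk ≈ 0 and the direction falls back to steepest descent.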
4.
John A. Ford Yasushi Narushima Hiroshi Yabe 《Computational Optimization and Applications》2008,40(2):191-216
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems because they avoid the storage of matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput. Optim. Appl. 28:203–225, 2004) proposed another conjugate gradient method based on a modified secant condition. In this paper, we make use of the multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
5.
Two modified Dai-Yuan nonlinear conjugate gradient methods
Li Zhang 《Numerical Algorithms》2009,50(1):1-16
In this paper, we propose two modified versions of the Dai–Yuan (DY) nonlinear conjugate gradient method. One is based on the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all the nice properties of the DY method; moreover, it converges globally for nonconvex functions even if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006) and Zhang et al. (Numer Math 104:561–572, 2006) and shares the good performance of the Hestenes–Stiefel method. Numerical results are also reported.
This work was supported by the Natural Science Foundation of China (10701018).
6.
Saman Babaie-Kafaki 《Journal of Computational and Applied Mathematics》2010,234(5):1374-1386
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu; the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they take both gradient and function values into account. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and the other for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to start the procedure. We compare implementations of our methods with the efficient conjugate gradient methods of Dai and Liao, and of Hestenes and Stiefel. Numerical test results show the efficiency of the proposed methods.
7.
Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
Gonglin Yuan 《Optimization Letters》2009,3(1):11-21
It is well known that the sufficient descent condition is very important for the global convergence of nonlinear conjugate gradient methods. In this paper, some modified conjugate gradient methods possessing this property are presented. Global convergence of the proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported.
This work was supported by Guangxi University Science Foundation grant X061041 and China NSF grant 10761001.
8.
A hybrid HS-DY conjugate gradient method
Building on the HS and DY methods and combining their respective strengths, this paper proposes a new hybrid conjugate gradient method for unconstrained optimization. Under the Wolfe line search, global convergence of the algorithm is proved without imposing a descent condition. Numerical experiments show that the new algorithm is more effective than the HS and PR methods.
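As a concrete sketch of HS-DY hybridization, one well-known combination (due to Dai and Yuan, and not necessarily the exact rule of this paper) truncates the HS parameter between 0 and the DY parameter:

```python
import numpy as np

def beta_hybrid_hs_dy(g_new, g_old, d_old):
    """One common HS-DY hybrid: beta_k = max(0, min(beta_HS, beta_DY)), with
    beta_HS = g_k^T y / (d^T y), beta_DY = ||g_k||^2 / (d^T y), y = g_k - g_{k-1}."""
    y = g_new - g_old
    dy = float(d_old @ y)         # d^T y > 0 under the Wolfe curvature condition
    beta_hs = float(g_new @ y) / dy
    beta_dy = float(g_new @ g_new) / dy
    return max(0.0, min(beta_hs, beta_dy))

# Small example (hand-picked vectors, d_old = -g_old as after a restart).
g_old = np.array([1.0, 0.0])
g_new = np.array([0.5, 1.0])
beta = beta_hybrid_hs_dy(g_new, g_old, -g_old)
```

The lower bound 0 protects global convergence (as in DY), while taking the HS value when it is moderate preserves the good practical behavior of HS.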
9.
10.
This paper studies a family of conjugate gradient methods that can be viewed as convex combinations of the FR and DY methods. Two Armijo-type line searches are proposed, and global convergence of the family is established under both line searches.
11.
12.
By combining Newton's method with the PRP conjugate gradient method, this paper proposes a modified PRP method that incorporates second-order derivative information. Under suitable assumptions the algorithm is globally convergent, and numerical examples demonstrate its effectiveness.
13.
Neculai Andrei 《Applied mathematics and computation》2009,213(2):361-369
Conjugate gradient methods are important for large-scale unconstrained optimization. This paper proposes an acceleration of these methods based on a modification of the steplength. The idea is to modify, in a multiplicative manner, the steplength αk computed by the Wolfe line search conditions by means of a positive parameter ηk, so as to improve the behavior of the classical conjugate gradient algorithms. It is shown that for uniformly convex functions the convergence of the accelerated algorithm remains linear, but the reduction in function values is significantly improved. Numerical comparisons with some conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some from the CUTE library, show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
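The flavor of multiplicative steplength acceleration can be sketched as follows: after the line search returns αk, scale it by a multiplier η derived from a one-dimensional model of f along the direction. This sketch uses quadratic interpolation through φ(0), φ'(0), φ(α); Andrei's actual scheme builds ηk from gradient information at the trial point, so the formula below is an illustrative stand-in.

```python
import numpy as np

def accelerate_steplength(f, x, d, g, alpha):
    """Return eta such that eta*alpha minimizes the quadratic interpolant of
    phi(t) = f(x + t*d) through phi(0), phi'(0) = g^T d, and phi(alpha).
    (Illustrative sketch of multiplicative steplength acceleration.)"""
    phi0 = f(x)
    dphi0 = float(g @ d)                  # phi'(0) < 0 for a descent direction
    phia = f(x + alpha * d)
    curv = phia - phi0 - dphi0 * alpha    # curvature term of the interpolant
    if curv <= 0:                         # interpolant not convex: keep alpha
        return 1.0
    return -dphi0 * alpha / (2.0 * curv)  # minimizer of the interpolant, over alpha

# Example: for a quadratic f, eta*alpha is the exact minimizing step along d.
f = lambda x: float(x @ x)
x = np.array([2.0, 0.0])
g = 2 * x
d = -g
eta = accelerate_steplength(f, x, d, g, alpha=0.1)
```

The next iterate is then x + eta*alpha*d instead of x + alpha*d, at the cost of one extra function (or gradient) evaluation per iteration.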
14.
Convergence analysis of three-term conjugate gradient methods
1. Introduction. Consider line search methods for solving unconstrained smooth optimization problems, in which x1 is given, dk is the search direction, and αk is the step size. In classical conjugate gradient methods, for k ≥ 2 the search direction dk is composed of the negative gradient direction −gk and the previous search direction dk−1, with d1 = −g1 and βk a parameter. Many formulas for computing βk have been proposed; two well-known ones are the FR and PRP formulas. Here and below, ‖·‖ denotes the Euclidean norm. Beale proposed a three-term restart conjugate gradient method whose search direction includes a restart direction dt; Powell introduced an appropriate restart criterion for this method and obtained very good numerical results. In this paper, we study search directions…
15.
Aiguo Lu Hongwei Liu Xiuyun Zheng Weijie Cong 《Applied mathematics and computation》2011,217(12):5547-5552
16.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method, which has good numerical performance but no global convergence result under traditional line searches such as Armijo, Wolfe and Goldstein line searches. In this paper a convergent version of the Liu–Storey conjugate gradient method (LS in short) is proposed for minimizing functions that have Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of objective functions, we can find an adequate step size at each iteration so as to guarantee the global convergence and improve the efficiency of the LS method in practical computation.
17.
18.
Zhifeng Dai 《Numerical Functional Analysis & Optimization》2013,34(8):892-906
In this article, we propose a new conjugate gradient type formula for unconstrained optimization problems. Its form is similar to the original PRP formula, and it inherits all the nice properties of the PRP method. By utilizing the techniques of the three-term PRP method of Zhang [20] and the modified PRP method of Yu [17], we propose two modified methods based on the new formula. Both modified methods generate sufficient descent directions independently of the line search used. Under some mild conditions, global convergence and a linear convergence rate of the two modified methods are established. Numerical results show that the proposed methods are efficient.
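The line-search-independent sufficient descent property of Zhang's three-term PRP construction can be checked directly: the extra term is chosen so that the cross terms in gk·dk cancel exactly, leaving gk·dk = −‖gk‖². A minimal sketch of that direction (function name is illustrative):

```python
import numpy as np

def three_term_prp_direction(g_new, g_old, d_old):
    """Zhang-style three-term PRP direction (sketch):
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1}, where
    beta_k = g_k^T y / ||g_{k-1}||^2 and theta_k = g_k^T d_{k-1} / ||g_{k-1}||^2.
    The beta and theta terms cancel in g_k^T d_k, so g_k^T d_k = -||g_k||^2
    regardless of the line search used."""
    y = g_new - g_old
    gg = float(g_old @ g_old)
    beta = float(g_new @ y) / gg
    theta = float(g_new @ d_old) / gg
    return -g_new + beta * d_old - theta * y

# Arbitrary vectors: the identity g_k^T d_k = -||g_k||^2 still holds exactly.
g_old = np.array([1.0, 2.0])
g_new = np.array([0.3, -1.1])
d_old = np.array([-1.0, -2.0])
d_new = three_term_prp_direction(g_new, g_old, d_old)
```

When gk·dk−1 = 0 (exact line search), theta vanishes and the direction reduces to the ordinary PRP direction.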
19.
20.
In this paper we propose a new line search algorithm that ensures global convergence of the Polak–Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak–Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters such that the first stationary point along a search direction can eventually be accepted when the algorithm is converging to a minimum point with a positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be recovered as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the good features of the Polak–Ribière method while avoiding pathological situations.
This research was supported by Agenzia Spaziale Italiana, Rome, Italy.