Similar Articles
20 similar articles found
1.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they use both gradient and function values. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We compare implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.
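For reference, the Dai–Liao scheme that both methods build on is usually written as

    d_{k+1} = -g_{k+1} + beta_k d_k,    beta_k^{DL} = g_{k+1}^T (y_k - t s_k) / (d_k^T y_k),    t >= 0,

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k (standard notation, not spelled out in this abstract). The modified secant equations mentioned above replace y_k by a vector that also incorporates the function values f_k and f_{k+1}, which is how these methods come to use function values as well as gradients.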

2.
It is well known that global convergence has not been established for the Polak–Ribière–Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In the convergence analysis of the PRP method with a Wolfe line search, the (sufficient) descent condition and the restriction β_k ≥ 0 are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed. Under some suitable conditions, by using a modified Wolfe line search, global convergence results are established for the PRP method. Some special choices for β_k which ensure the descent property of the search direction are also discussed. Preliminary numerical results on a set of large-scale problems are reported, showing that the computational efficiency of the PRP method is encouraging.
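For context, the quantities involved are (standard definitions, not given in the abstract itself):

    beta_k^{PRP} = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2,

and the standard Wolfe conditions on the steplength alpha_k are

    f(x_k + alpha_k d_k) <= f(x_k) + delta * alpha_k * g_k^T d_k,
    g(x_k + alpha_k d_k)^T d_k >= sigma * g_k^T d_k,    0 < delta < sigma < 1.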

3.
1. Introduction. We consider the global convergence of conjugate gradient methods for the unconstrained nonlinear optimization problem min f(x), where f: R^n -> R is continuously differentiable and its gradient is denoted by g. We consider only the case where the methods are implemented without regular restarts. The iterative formula is given by

    x_{k+1} = x_k + lambda_k d_k,    (1.1)

and the search direction d_k is defined by

    d_1 = -g_1,    d_k = -g_k + beta_k d_{k-1}  (k >= 2),

where beta_k is a scalar, lambda_k is a steplength, and g_k denotes g(x_k). The best-known formulas fo…
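The iteration (1.1) is straightforward to sketch in code. The following is a minimal illustration, with a simple Armijo backtracking search standing in for the paper's line search and the PRP formula used only as one concrete choice of beta_k:

    import numpy as np

    def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
        # Minimal nonlinear CG sketch: x_{k+1} = x_k + lambda_k d_k,
        # d_{k+1} = -g_{k+1} + beta_k d_k, with PRP beta as one concrete choice.
        x = np.asarray(x0, dtype=float).copy()
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            if g @ d >= 0:          # safeguard: restart if d is not a descent direction
                d = -g
            lam, fx = 1.0, f(x)     # Armijo backtracking for the steplength
            while f(x + lam * d) > fx + 1e-4 * lam * (g @ d):
                lam *= 0.5
            x_new = x + lam * d
            g_new = grad(x_new)
            beta = g_new @ (g_new - g) / (g @ g)   # PRP formula
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x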

4.
Conjugate gradient methods are appealing for large scale nonlinear optimization problems, because they avoid the storage of matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput. Optim. Appl. 28:203–225, 2004) proposed another conjugate gradient method based on the modified secant condition. In this paper, we make use of a multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
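For reference, the standard secant condition of quasi-Newton methods is B_{k+1} s_k = y_k, and the Dai–Liao approach imposes the related condition d_{k+1}^T y_k = -t g_{k+1}^T s_k on the search direction. The multi-step condition of Ford and Moghrabi replaces s_k and y_k by combinations built from several previous steps via interpolation; the exact combinations used in this paper are not reproduced here.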

5.
A modified PRP conjugate gradient method
This paper gives a modified PRP method that is globally convergent for nonconvex functions and R-linearly convergent for uniformly convex functions. Furthermore, the presented method possesses the sufficient descent property and automatically generates search directions lying in a trust region, without carrying out any line search. Numerical results indicate that the new method is interesting for the given test problems. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
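In standard terms (a common reading of these two properties, not a quotation from the paper), sufficient descent means g_k^T d_k <= -c ||g_k||^2 for some constant c > 0 at every iteration, and the automatic trust-region property means ||d_k|| <= C ||g_k|| for some constant C > 0, both holding independently of the line search.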

6.
In this paper, we slightly modify the Wei–Yao–Liu nonlinear conjugate gradient method proposed by Wei et al. [Z. Wei, S. Yao, L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput. 183 (2006) 1341–1350] so that the modified method possesses better convergence properties. In fact, we prove that the modified method satisfies the sufficient descent condition with a larger parameter under the strong Wolfe line search and converges globally for nonconvex minimization. We also extend these results to the Hestenes–Stiefel method and prove that the modified HS method is globally convergent for nonconvex functions under the standard Wolfe conditions. Numerical results on test problems from the CUTE library are reported.
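As usually stated in the literature (subject to the caveat that this paper modifies it further), the Wei–Yao–Liu parameter is

    beta_k^{WYL} = ( ||g_k||^2 - (||g_k|| / ||g_{k-1}||) g_k^T g_{k-1} ) / ||g_{k-1}||^2,

a PRP-like formula whose numerator is nonnegative by the Cauchy–Schwarz inequality.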

7.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
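For context, in nonlinear least squares one minimizes f(x) = (1/2) ||r(x)||^2, whose gradient and Hessian are

    grad f(x) = J(x)^T r(x),    Hess f(x) = J(x)^T J(x) + sum_i r_i(x) Hess r_i(x),

where J is the Jacobian of the residual vector r. The structure exploited by structured secant conditions is this splitting: the first term is available exactly, and only the second needs to be approximated.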

8.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu–Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and performs well numerically. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
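The paper's exact rule is not reproduced in the abstract; the following is an illustrative Grippo-style nonmonotone Armijo backtracking sketch, which accepts a step once the new function value falls below the maximum over recent iterates rather than below f(x_k) itself (function names and defaults here are mine):

    def nonmonotone_armijo(f, x, d, g_dot_d, f_hist, delta=1e-4, rho=0.5, alpha0=1.0):
        # f_hist: function values over the last M iterates; g_dot_d = g_k^T d_k < 0
        f_ref = max(f_hist)                      # nonmonotone reference value
        alpha = alpha0
        while f(x + alpha * d) > f_ref + delta * alpha * g_dot_d:
            alpha *= rho                         # backtrack
        return alpha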

9.
It is well known that the HS method and the PRP method may not converge for nonconvex optimization even with an exact line search. Some globalization techniques have been proposed, for instance the PRP+ technique and the Grippo–Lucidi technique for the PRP method. In this paper, we propose a new efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. The new technique makes full use of information from the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods equipped with this technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.
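For reference, the PRP+ globalization mentioned above simply truncates the parameter at zero:

    beta_k^{PRP+} = max{ beta_k^{PRP}, 0 },

which restores global convergence when combined with a line search that enforces sufficient descent.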

10.
Global convergence result for conjugate gradient methods
Conjugate gradient optimization algorithms depend on the search directions, …

11.
Optimization, 2012, 61(7): 1027–1042
In order to take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods, based on Andrei's approach of combining the CG parameters convexly and Powell's approach of restricting the CG parameter to be nonnegative. The hybridization parameter in our methods is computed from a modified secant equation obtained from the search direction of the Hager–Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
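A convex hybridization of this kind takes the form (standard notation; the rule for theta_k is the paper's contribution and is not reproduced here):

    beta_k = (1 - theta_k) beta_k^{HS} + theta_k beta_k^{DY},    theta_k in [0, 1],

with beta_k^{HS} = g_{k+1}^T y_k / (d_k^T y_k) and beta_k^{DY} = ||g_{k+1}||^2 / (d_k^T y_k).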

12.
An efficient conjugate gradient method with sufficient descent
For large-scale unconstrained optimization problems, this paper proposes a sufficient-descent conjugate gradient formula and builds the corresponding algorithm. Independently of any line search condition, the algorithm produces a sufficient descent direction at every iteration. If the steplength is computed by the standard inexact Wolfe line search, good global convergence of the algorithm is obtained under conventional assumptions. Finally, large-scale numerical experiments are carried out and evaluated with the performance profiles of Dolan and Moré; the results show that the algorithm is effective.

13.
In this study, a modified spectral conjugate gradient projection method is presented for total variation image restoration, which is reformulated as a nonlinear constrained optimization problem over a closed constraint set. The global convergence of the proposed scheme is analyzed. Finally, some numerical results illustrate the efficiency of the method.
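A typical total variation restoration model (one common form; the paper's exact formulation may differ) is

    min_u  (1/2) || K u - f ||^2 + lambda * TV(u),

where f is the observed image, K a blur operator (the identity for pure denoising), TV(u) the total variation seminorm, and lambda > 0 a regularization weight balancing data fidelity against smoothness.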

14.
Linear systems with complex coefficients arise from various physical problems. Examples are the Helmholtz equation and the Maxwell equations approximated by finite difference or finite element methods, which lead to large sparse linear systems. When the continuous problem is reduced to integral equations, discretization yields a dense linear system. The resulting matrices are generally non-Hermitian but, most of the time, symmetric, so the classical conjugate gradient method cannot be applied directly. Usually, these linear systems have to be solved with a large number of unknowns because, for instance, in electromagnetic scattering problems the mesh size must be related to the wavelength of the incoming wave: the higher the frequency, the smaller the mesh size must be. For 3D problems, direct solvers are no longer practical because of the huge memory they require, so iterative methods are attractive for this kind of problem even though their convergence cannot always be guaranteed theoretically. In this paper we derive several methods from a unified framework and compare these algorithms numerically on some test problems.
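One classical method for complex symmetric systems (A^T = A but A^H != A) is COCG, which is not necessarily among the methods compared in this paper but illustrates the basic idea: run the classical CG recurrences with the Hermitian inner product replaced by the unconjugated bilinear form x^T y. A minimal sketch:

    import numpy as np

    def cocg(A, b, x0=None, tol=1e-8, max_iter=500):
        # Conjugate orthogonal CG: CG with r^T r and p^T A p instead of the
        # conjugated inner products (valid when A is complex symmetric).
        x = np.zeros_like(b) if x0 is None else x0.astype(complex).copy()
        r = b - A @ x
        p = r.copy()
        rr = r @ r                    # unconjugated: numpy's @ does not conjugate
        for _ in range(max_iter):
            if np.linalg.norm(r) < tol:
                break
            Ap = A @ p
            alpha = rr / (p @ Ap)     # bilinear form p^T A p
            x = x + alpha * p
            r = r - alpha * Ap
            rr_new = r @ r
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x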

15.
Optimization, 2012, 61(4): 993–1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been studied extensively in recent years. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. The numerical results show that the methods are efficient for the given test problems. In addition, methods related to this family are discussed in a unified way.

16.
Optimization, 2012, 61(5): 1173–1175
In this note, we show that assumption (3.3) in Theorem 3.2 of Babaie-Kafaki can be deleted. As mentioned by Babaie-Kafaki, it is not appropriate to treat condition (3.3) as an assumption. Throughout, we use the same notation and equation numbers as Babaie-Kafaki.

17.
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction is automatically in a trust region; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method, namely the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a sequence of tiny steps. The global convergence and the linear convergence rate of the given method are established. Numerical results show that this method is interesting.
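The Zoutendijk condition referred to in (iii) is the standard summability bound

    sum_k ( g_k^T d_k )^2 / ||d_k||^2 < infinity,

which, together with properties (i) and (ii), yields ||g_k|| -> 0 and hence global convergence.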

18.
Optimization, 2012, 61(11): 2277–2287
Two adaptive choices for the parameter of the Dai–Liao conjugate gradient (CG) method are suggested. One is obtained by minimizing the distance between the search directions of the Dai–Liao method and a three-term CG method proposed by Zhang et al., and the other by minimizing the Frobenius condition number of the search direction matrix. Brief global convergence analyses are given. Numerical results are reported; they demonstrate the effectiveness of the suggested adaptive choices.

19.
20.
Abstract. The global convergence of general three-term conjugate gradient methods with the relaxed strong Wolfe line search is proved.
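A generic three-term search direction takes the form (one common template; the general class treated in the paper may be broader)

    d_{k+1} = -g_{k+1} + beta_k d_k + theta_k y_k,

where y_k = g_{k+1} - g_k and beta_k, theta_k are scalars; choosing theta_k suitably can enforce sufficient descent regardless of the line search.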
