Similar Articles
1.
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems because they avoid the storage of matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput. Optim. Appl. 28:203–225, 2004) proposed another conjugate gradient method based on the modified secant condition. In this paper, we make use of a multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
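
For readers unfamiliar with the Dai–Liao family, the following is a minimal sketch (not the authors' implementation; the names and the safeguard threshold are illustrative) of the Dai–Liao choice of the CG parameter, which enforces the secant-style conjugacy condition d_{k+1}^T y_k = -t g_{k+1}^T s_k:

```python
import numpy as np

def dai_liao_beta(g_new, d, s, y, t=0.1):
    """Dai-Liao CG parameter: beta = g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k),
    with s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k and t >= 0."""
    denom = d @ y
    if abs(denom) < 1e-12:   # safeguard against breakdown of the denominator
        return 0.0           # fall back to a steepest descent direction
    return g_new @ (y - t * s) / denom

def next_direction(g_new, d, s, y, t=0.1):
    """CG direction update: d_{k+1} = -g_{k+1} + beta_k * d_k."""
    return -g_new + dai_liao_beta(g_new, d, s, y, t) * d
```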

2.
A variant of the preconditioned conjugate gradient method to solve generalized least squares problems is presented. If the problem is $\min_x\,(Ax-b)^T W^{-1}(Ax-b)$ with $A\in\mathbb{R}^{m\times n}$ and $W\in\mathbb{R}^{m\times m}$ symmetric and positive definite, the method needs only a preconditioner $A_1\in\mathbb{R}^{n\times n}$, but not the inverse of the matrix $W$ or of any of its submatrices. Freund's comparison result for regular least squares problems is extended to generalized least squares problems. An error bound is also given.
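
To make the problem concrete, here is a small NumPy sketch of a generalized least squares problem solved through its normal equations; it is only a reference solve, since the whole point of the paper's CG variant is to avoid applying $W^{-1}$ (the data and sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
M = rng.standard_normal((m, m))
W = M @ M.T + m * np.eye(m)        # symmetric positive definite weights

# Normal equations of min (Ax - b)^T W^{-1} (Ax - b):
#   A^T W^{-1} A x = A^T W^{-1} b
Winv_A = np.linalg.solve(W, A)     # computes W^{-1} A without forming W^{-1}
x = np.linalg.solve(A.T @ Winv_A, Winv_A.T @ b)   # W^{-1} is symmetric
print(x)
```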

3.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they use both gradient and function values. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We compare implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.
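
As background, a modified secant vector of the kind referred to here (in the spirit of Zhang, Deng and Chen) folds function values into the usual gradient difference; the sketch below takes u = s in the general formula and is illustrative, not the paper's code:

```python
import numpy as np

def modified_y(s, y, f_old, f_new, g_old, g_new):
    """Modified secant vector using function values:
        y_bar = y + (theta / s^T s) * s,
        theta = 6*(f_k - f_{k+1}) + 3*(g_k + g_{k+1})^T s,
    a common choice taking u = s; a sketch, not the authors' code."""
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
    return y + (theta / (s @ s)) * s
```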

4.
Two modified Dai-Yuan nonlinear conjugate gradient methods
In this paper, we propose two modified versions of the Dai-Yuan (DY) nonlinear conjugate gradient method. One is based on the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all the nice properties of the DY method. Moreover, this method converges globally for nonconvex functions even if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006) and Zhang et al. (Numer Math 104:561–572, 2006), and retains the good performance of the Hestenes-Stiefel method. Numerical results are also reported. This work was supported by the National Natural Science Foundation of China (grant 10701018).
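
For reference, the unmodified Dai–Yuan parameter that both variants start from is the following one-liner (a sketch; the variable names are illustrative):

```python
def dai_yuan_beta(g_new, d, y):
    """Dai-Yuan parameter: beta = ||g_{k+1}||^2 / (d_k^T y_k).
    Under a Wolfe line search with a descent direction d_k, the
    denominator d_k^T y_k is positive, so no safeguard is needed here."""
    return (g_new @ g_new) / (d @ y)
```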

5.
It is well known that the sufficient descent condition is very important to the global convergence of nonlinear conjugate gradient methods. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of the proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University Science Foundation grant X061041 and China NSF grant 10761001.
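
Since the weak Wolfe–Powell rule is central here, the following hedged sketch shows what a WWP acceptance test looks like (f and grad are caller-supplied callables; the constants c1, c2 are conventional defaults, not values from the paper):

```python
def weak_wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Weak Wolfe-Powell test for a trial step alpha along a descent
    direction d (vectors are NumPy arrays):
        f(x + alpha*d) <= f(x) + c1*alpha*g^T d   (sufficient decrease)
        g(x + alpha*d)^T d >= c2*g^T d            (curvature)
    with 0 < c1 < c2 < 1."""
    gtd = grad(x) @ d
    x_new = x + alpha * d
    return (f(x_new) <= f(x) + c1 * alpha * gtd
            and grad(x_new) @ d >= c2 * gtd)
```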

6.
A modified PRP conjugate gradient method
This paper gives a modified PRP method which is globally convergent for nonconvex functions and R-linearly convergent for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically stays within a trust region without any line search technique. Numerical results indicate that the new method is promising on the given test problems. This work is supported by Guangxi University Science Foundation grant X061041 and China NSF grant 10761001.
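
For context, the baseline PRP parameter being modified, together with the familiar PRP+ nonnegativity safeguard, looks like this (a sketch of the classical formulas, not the method of this entry):

```python
def prp_beta(g_new, g_old, nonnegative=True):
    """Polak-Ribiere-Polyak parameter: beta = g_{k+1}^T y_k / ||g_k||^2,
    optionally truncated to max(beta, 0), i.e. the PRP+ safeguard."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return max(beta, 0.0) if nonnegative else beta
```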

7.
This paper provides a modification to the Gauss–Newton method for nonlinear least squares problems. The new method is based on structured quasi-Newton methods which yield a good approximation to the second derivative matrix of the objective function. In particular, we propose BFGS-like and DFP-like updates in a factorized form which give descent search directions for the objective function. We prove local and q-superlinear convergence of our methods, and give results of computational experiments for the BFGS-like and DFP-like updates. This work was supported in part by the Grant-in-Aid for Encouragement of Young Scientists of the Japanese Ministry of Education: (A)61740133 and (A)62740137.
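
For orientation, the plain Gauss–Newton step that the structured updates augment solves a linearized least squares subproblem; a minimal sketch (the structured second-order correction of the paper is omitted):

```python
import numpy as np

def gauss_newton_step(J, r):
    """Gauss-Newton step for min 0.5*||r(x)||^2: solve J^T J p = -J^T r.
    A least-squares solve on J avoids forming J^T J explicitly. The paper
    adds a quasi-Newton estimate of the omitted second-order term
    sum_i r_i(x) * hessian(r_i)(x)."""
    p, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return p
```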

8.
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction is automatically in a trust region; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps. The global convergence and the linear convergence rate of the given method are established. Numerical results show that this method is promising.

9.
We consider the global convergence of conjugate gradient methods for the unconstrained nonlinear optimization problem $\min f(x)$, where $f:\mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and its gradient is denoted by $g$. We consider only the case where the methods are implemented without regular restarts. The iterative formula is given by $x_{k+1} = x_k + \alpha_k d_k$, and the search direction $d_k$ is defined by $d_k = -g_k + \beta_k d_{k-1}$ (with $d_1 = -g_1$), where $\beta_k$ is a scalar, $\alpha_k$ is a steplength, and $g_k$ denotes $g(x_k)$. The best-known formulas fo…

10.
Optimization, 2012, 61(7):1027–1042
In order to take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods based on Andrei's approach of hybridizing the CG parameters convexly and Powell's approach of nonnegative restriction of the CG parameters. The hybridization parameter in our methods is computed from a modified secant equation obtained based on the search direction of the Hager–Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report some numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
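
Schematically, the convex hybridization described here combines the two classical parameters as below; the hybridization parameter theta is simply passed in, whereas in the paper it comes from a modified secant equation (a sketch, not the authors' code):

```python
def hybrid_hs_dy_beta(g_new, d, y, theta):
    """Convex combination of Hestenes-Stiefel and Dai-Yuan parameters,
        beta = (1 - theta)*beta_HS + theta*beta_DY,  theta in [0, 1],
    followed by Powell's nonnegativity restriction max(beta, 0)."""
    dty = d @ y
    beta_hs = (g_new @ y) / dty
    beta_dy = (g_new @ g_new) / dty
    return max((1.0 - theta) * beta_hs + theta * beta_dy, 0.0)
```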

11.
In 1981, Dennis and Walker developed a convergence theory for structured secant methods which included the PSB and the DFP secant methods but not the straightforward structured version of the BFGS secant method. Here, we fill this gap in the theory by establishing a convergence theory for the structured BFGS secant method. A direct application of our new theory gives the first proof of local and q-superlinear convergence of the important structured BFGS secant method for the nonlinear least-squares problem, which is used by Dennis, Gay, and Welsch in the current version of the popular and successful NL2SOL code. This research was sponsored by SDIO/IST/ARO, AFOSR-85-0243, and DOE-DEFG05-86ER-25017. A portion of this work is contained in the second author's doctoral thesis under the supervision of the other two authors in the Department of Mathematical Sciences, Rice University. The second author would like to thank Universidad del Valle, Cali, Colombia, for support during his graduate studies. An early draft of this work was presented at the SIAM 35th Anniversary Meeting, October 12–15, 1987, Denver, Colorado.

12.
In this paper, we make a modification to the Liu-Storey (LS) conjugate gradient method and propose a descent LS method. The method can generate sufficient descent directions for the objective function. This property is independent of the line search used. We prove that the modified LS method is globally convergent with the strong Wolfe line search. The numerical results show that the proposed descent LS method is efficient for the unconstrained problems in the CUTEr library.
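
For reference, the unmodified Liu–Storey parameter reads as follows (a sketch with illustrative names; the descent modification proposed in this entry is not reproduced):

```python
def liu_storey_beta(g_new, g_old, d):
    """Liu-Storey parameter: beta = g_{k+1}^T y_k / (-g_k^T d_k)."""
    y = g_new - g_old
    return (g_new @ y) / -(g_old @ d)
```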

13.
In this paper, a three-term conjugate gradient algorithm is developed for solving large-scale unconstrained optimization problems. The search direction at each iteration is obtained by correcting the steepest descent direction with the difference between consecutive iterates and the difference between consecutive gradients. It is proved that such a direction satisfies an approximate secant condition as well as the conjugacy condition. Acceleration and restart strategies are incorporated into the design of the algorithm to improve its numerical performance. Global convergence of the proposed algorithm is established under two mild assumptions. By applying the algorithm to 75 benchmark test problems from the literature, the obtained results indicate that it outperforms similar state-of-the-art algorithms.
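
To illustrate the three-term idea (though not this paper's specific construction), here is a well-known three-term PRP direction in the style of Zhang, Zhou and Li, which yields g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 regardless of the line search:

```python
def three_term_direction(g_new, g_old, d):
    """Three-term CG direction (Zhang-Zhou-Li style):
        d_{k+1} = -g_{k+1} + beta*d_k - theta*y_k,
        beta  = g_{k+1}^T y_k / ||g_k||^2   (PRP),
        theta = g_{k+1}^T d_k / ||g_k||^2,
    so that g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 (sufficient descent)."""
    y = g_new - g_old
    gg = g_old @ g_old
    beta = (g_new @ y) / gg
    theta = (g_new @ d) / gg
    return -g_new + beta * d - theta * y
```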

14.
In this paper, we make a slight modification to the Wei–Yao–Liu nonlinear conjugate gradient method proposed by Wei et al. [Z. Wei, S. Yao, L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput. 183 (2006) 1341–1350] so that the modified method possesses better convergence properties. In fact, we prove that the modified method satisfies the sufficient descent condition with a larger parameter under the strong Wolfe line search and converges globally for nonconvex minimization. We also extend these results to the Hestenes–Stiefel method and prove that the modified HS method is globally convergent for nonconvex functions under the standard Wolfe conditions. Numerical results are reported on some test problems from the CUTE library.
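
For reference, the Wei–Yao–Liu parameter that this entry modifies can be written as follows (a sketch with illustrative names):

```python
import numpy as np

def wyl_beta(g_new, g_old):
    """Wei-Yao-Liu parameter:
        beta = g_{k+1}^T (g_{k+1} - (||g_{k+1}||/||g_k||)*g_k) / ||g_k||^2,
    which is nonnegative by the Cauchy-Schwarz inequality."""
    ratio = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return g_new @ (g_new - ratio * g_old) / (g_old @ g_old)
```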

15.
It is well known that global convergence has not been established for the Polak-Ribière-Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In the convergence analysis of the PRP method with a Wolfe line search, the (sufficient) descent condition and the restriction $\beta_k \ge 0$ are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed. Under some suitable conditions, by using a modified Wolfe line search, global convergence results are established for the PRP method. Some special choices of $\beta_k$ that ensure the descent property of the search direction are also discussed. Preliminary numerical results on a set of large-scale problems are reported to show that the computational efficiency of the PRP method is encouraging.

16.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method which has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function within the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
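
The paper's specific rule (with its Lipschitz estimation) is not reproduced here, but a classic Grippo-style nonmonotone Armijo test conveys the flavor: sufficient decrease is measured against the worst of the last few function values rather than the current one (names and the history length are illustrative):

```python
from collections import deque

def nonmonotone_armijo_ok(f_new, f_history, alpha, gtd, c1=1e-4):
    """Nonmonotone Armijo test: accept the step if
        f(x + alpha*d) <= max(recent f values) + c1*alpha*g^T d,
    where gtd = g^T d < 0 for a descent direction d."""
    return f_new <= max(f_history) + c1 * alpha * gtd

# Usage: keep a bounded window of recent function values.
history = deque(maxlen=10)
history.append(1.0)
print(nonmonotone_armijo_ok(0.9, history, alpha=0.5, gtd=-0.3))
```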

17.
It is well known that the HS method and the PRP method may not converge for nonconvex optimization even with an exact line search. Some globalization techniques have been proposed, for instance, the PRP+ globalization technique and the Grippo-Lucidi globalization technique for the PRP method. In this paper, we propose a new efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. This new technique makes full use of information from the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods equipped with this technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.

18.
Global convergence result for conjugate gradient methods
Conjugate gradient optimization algorithms depend on the search directions,
