Similar Documents (20 results)
1.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method which has good numerical performance but no global convergence result under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper a convergent version of the Liu-Storey conjugate gradient method (LS for short) is proposed for minimizing functions that have Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of the objective function, we can find an adequate step size at each iteration so as to guarantee global convergence and improve the efficiency of the LS method in practical computation.
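The abstract does not spell out the step-size rule, but a minimal Python sketch of the underlying idea — an LS iteration whose step size comes from a running estimate of the gradient's Lipschitz constant instead of a line search — might look as follows; the update for the estimate L and the step formula are illustrative assumptions, not the paper's exact rules:

```python
import numpy as np

def ls_cg_lipschitz(grad, x0, tol=1e-6, max_iter=1000):
    """Illustrative Liu-Storey CG iteration with a step size derived
    from a running Lipschitz estimate rather than a line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    L = 1.0                                    # initial Lipschitz estimate
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # a step that yields decrease when L bounds the gradient's
        # Lipschitz constant along d (g.d < 0 for descent directions)
        alpha = -np.dot(g, d) / (L * np.dot(d, d))
        x_new = x + alpha * d
        g_new = grad(x_new)
        y, s = g_new - g, x_new - x
        # refine the Lipschitz estimate from observed gradient variation
        L = max(L, np.linalg.norm(y) / (np.linalg.norm(s) + 1e-16))
        # Liu-Storey conjugacy parameter: beta = -g_new^T y / (d^T g)
        beta = -np.dot(g_new, y) / np.dot(d, g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```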

2.
In this paper, we describe an implementation and give performance results for a conjugate gradient algorithm for unconstrained optimization. The algorithm is based upon the Nazareth three-term formula and incorporates Allwright preconditioning matrices and restart tests. The performance results for this combination compare favorably with existing codes. The support of the Science and Engineering Research Council is gratefully acknowledged.
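For reference, Nazareth's three-term recurrence is commonly written (with $y_k = g_{k+1} - g_k$) as
$$d_{k+1} = -y_k + \frac{y_k^{T} y_k}{d_k^{T} y_k}\, d_k + \frac{y_{k-1}^{T} y_k}{d_{k-1}^{T} y_{k-1}}\, d_{k-1},$$
which preserves conjugacy on quadratics without exact line searches; this is the standard statement as usually quoted and may differ in detail from the implementation described above.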

3.
The unconstrained optimization of a function of several variables is considered. An algorithm is constructed using the notion of generalized conjugate directions. It is proved that this method will find the minimum of a quadratic function in a finite number of steps. Some well-known conjugate direction methods are shown to be special cases of the generalized method.
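As background: directions $d_0, \dots, d_{n-1}$ are conjugate with respect to a symmetric positive definite matrix $A$ when $d_i^{T} A d_j = 0$ for $i \neq j$, and exact line searches along $n$ such directions minimize the quadratic $q(x) = \tfrac{1}{2} x^{T} A x - b^{T} x$ in at most $n$ steps — the finite-termination property that the generalized method above retains.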

4.
An erroneous formulation of the construction of an A-conjugate pair given in Ref. 1 is corrected.

5.
A conjugate direction algorithm without line searches
We develop an algorithm which generates conjugate search directions and maintains finite termination, when applied to quadratic functions, without requiring that line searches be exact. The technique requires O(n) storage, where n is the dimension of the problem. Results when the algorithm is applied to a number of standard test problems are included. Work was performed under the auspices of the US Energy Research and Development Administration. The author would like especially to thank Marie-Anne Neimat, who put much effort into the programming of the algorithm and the generation of test results.

6.
In this paper, new global convergence results are obtained under very weak conditions for the Polak-Ribière (PR) and Hestenes-Stiefel (HS) conjugate gradient methods for unconstrained optimization, where the parameters β_k^PR and β_k^HS of the PR and HS methods may take values in a certain negative region that depends on k. These new convergence results improve upon existing results in the literature. Numerical experiments show that the new PR and HS methods of this paper are quite effective.
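For reference, the classical parameters referred to here are
$$\beta_k^{PR} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^{T}(g_k - g_{k-1})}{d_{k-1}^{T}(g_k - g_{k-1})};$$
the paper's contribution is convergence when these are allowed to take values in a k-dependent negative region.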

7.
In this article, we investigate the convergence rate of the CG-DESCENT method proposed by Hager and Zhang [1: W. W. Hager and H. Zhang (2005), A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM Journal on Optimization 16:170-192]. Under reasonable conditions, we show that the CG-DESCENT method with the Wolfe line search is n-step superlinearly, and even quadratically, convergent if a suitable restart technique is used. Some numerical results are also reported to verify the theoretical results.
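For context, the CG-DESCENT parameter of Hager and Zhang [1] is usually stated as
$$\beta_k^{HZ} = \frac{1}{d_k^{T} y_k} \left( y_k - 2 d_k \frac{\|y_k\|^2}{d_k^{T} y_k} \right)^{T} g_{k+1}, \qquad y_k = g_{k+1} - g_k,$$
which guarantees descent directions independently of the accuracy of the line search.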

8.
A family of accelerated conjugate direction methods, corresponding to the Broyden family of quasi-Newton methods, is described. It is shown that all members of the family generate the same sequence of points approximating the optimum and the same sequence of search directions, provided only that each direction vector is normalized before the stepsize to be taken in that direction is determined. With minimal restrictions on how the stepsize is determined (sufficient only for convergence), the accelerated methods applied to the optimization of a function of n variables are shown to have an (n+1)-step quadratic rate of convergence. Furthermore, the information needed to generate an accelerating step can be stored in a single n-vector, rather than the usual n×n symmetric matrix, without changing the theoretical order of convergence. The relationships between this family of methods and existing conjugate direction methods are discussed, and numerical experience with two members of the family is presented. This research was sponsored by the United States Army under Contract No. DAAG29-75-C-0024. The author gratefully acknowledges the valuable assistance of Julia H. Gray, of the Mathematics Research Center, University of Wisconsin, Madison, who painstakingly programmed these methods and obtained the computational results.
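As background, the Broyden family referred to here is the one-parameter family of quasi-Newton inverse-Hessian updates
$$H_{k+1}^{\phi} = (1-\phi)\, H_{k+1}^{DFP} + \phi\, H_{k+1}^{BFGS},$$
interpolating between the DFP and BFGS updates; the accelerated conjugate direction methods above correspond to members of this family.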

9.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yields the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments with one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which the authors had obtained earlier. Special thanks must go to Professor M. R. Hestenes, whose constant encouragement and assistance made this paper possible.
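A minimal Python sketch of Gram-Schmidt conjugation relative to a symmetric positive definite matrix A — the kernel underlying the CGS relations, not Hestenes' full minimization routine — could read:

```python
import numpy as np

def conjugate_gram_schmidt(A, V):
    """Produce A-conjugate directions p_0, ..., p_{m-1} from linearly
    independent vectors v_0, ..., v_{m-1}: each new vector has its
    A-components along the previous directions removed, so that
    p_i^T A p_j = 0 for i != j."""
    P = []
    for v in V:
        p = np.array(v, dtype=float)
        for q in P:
            p -= (p @ (A @ q)) / (q @ (A @ q)) * q
        P.append(p)
    return P
```

With A the Hessian of a quadratic, exact line searches along these directions reproduce conjugate-direction minimization; the conjugate-gradient routine corresponds to the special choice v_k = g_k.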

10.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes-Stiefel and Dai-Yuan choices, i.e. β_k = (1 - θ_k)β_k^HS + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction of the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation ∇²f(x_{k+1}) s_k = y_k, where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
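The Hestenes-Stiefel and Dai-Yuan parameters entering the combination are, in standard notation,
$$\beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{y_k^{T} d_k}, \qquad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{y_k^{T} d_k},$$
so the hybrid interpolates between a method with a good conjugacy property (HS) and one with strong convergence theory (DY).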

11.
In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions by employing an Armijo-type line search which is simpler and less demanding than those defined in [4,10]. A favorable property of this method is that we can choose the initial stepsize as the one-dimensional minimizer of a quadratic model Φ(t) := f(x_k) + t g_k^T d_k + (1/2)t^2 d_k^T Q_k d_k, where Q_k is a positive definite matrix that carries some second-order information of the objective function f. This line search may therefore make the stepsize t_k more easily accepted. Preliminary numerical results show that this method is efficient.
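A small sketch of this initial-stepsize choice, assuming some positive definite Q_k is available (how it is built from second-order information is left to the paper):

```python
import numpy as np

def initial_stepsize(g, d, Q):
    """Minimizer of the quadratic model
    Phi(t) = f(x_k) + t g^T d + 0.5 t^2 d^T Q d:
    Phi'(t) = g^T d + t d^T Q d = 0  =>  t* = -g^T d / (d^T Q d)."""
    curvature = d @ (Q @ d)
    if curvature <= 0.0:        # model not convex along d; fall back
        return 1.0
    return -(g @ d) / curvature
```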

12.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method which has good numerical performance but no global convergence result under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function within the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
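The paper's exact rule is not reproduced in the abstract; a generic nonmonotone Armijo test in the Grippo-Lampariello-Lucidi style conveys the flavor — the memory window, the constants, and the omitted Lipschitz-based step estimate are all assumptions here:

```python
def nonmonotone_armijo(f, x, d, g_dot_d, recent_f, c=1e-4, rho=0.5, alpha=1.0):
    """Accept alpha once f(x + alpha d) <= max(recent f-values)
    + c * alpha * g^T d; comparing against a max over a memory window,
    rather than f(x_k) alone, is what makes the test nonmonotone."""
    f_ref = max(recent_f)
    while f(x + alpha * d) > f_ref + c * alpha * g_dot_d:
        alpha *= rho
    return alpha
```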

13.
In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in such a way that the first stationary point along a search direction can eventually be accepted when the algorithm is converging to a minimum point with a positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the good features of the Polak-Ribière method while avoiding pathological situations. This research was supported by Agenzia Spaziale Italiana, Rome, Italy.

14.
In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions by employing an Armijo-type line search which is simpler and less demanding than those defined in [4,10]. A favorable property of this method is that we can choose the initial stepsize as the one-dimensional minimizer of a quadratic model Φ(t) := f(x_k) + t g_k^T d_k + (1/2)t^2 d_k^T Q_k d_k, where Q_k is a positive definite matrix that carries some second-order information of the objective function f. This line search may therefore make the stepsize t_k more easily accepted. Preliminary numerical results show that this method is efficient.

15.
The effect of nonlinearly scaling the objective function on the variable-metric method is investigated, and Broyden's update is modified so that a property of invariancy to the scaling is satisfied. A new three-parameter class of updates is generated, and criteria for an optimal choice of the parameters are given. Numerical experiments compare the performance of a number of algorithms of the resulting class. The author is indebted to Professor S. S. Oren, Economic Engineering Department, Stanford University, Stanford, California, for stimulating discussions during the development of this paper. He also recognizes the financial support by the National Research Council of Italy (CNR) for his stay at Stanford University.

16.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it avoids the tendency toward small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence can be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results for the family are also presented.

17.
On the Deng-Wang Theorems
Deng and Wang stated and proved several important theorems. This paper gives simpler proofs of analogous theorems under slightly different conditions, discusses the meaning and role of these theorems, and extends them.

18.
The quasi-Newton method is a well-known, effective method for solving optimization problems. Since it is a line search method, which requires a line search procedure after determining a search direction at each iteration, a line search rule must be chosen to select a step size along the search direction. In this paper, we propose a new inexact line search rule for the quasi-Newton method and establish some global convergence results for this method. These results are useful in designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method with the new line search rule.
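For context — the abstract concerns the line search rule, which it does not state — the standard BFGS inverse-Hessian update applied at each quasi-Newton iteration is sketched below; the paper's new rule itself is not reproduced:

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H
    from the step s = x_{k+1} - x_k and gradient change
    y = g_{k+1} - g_k (requires the curvature condition y^T s > 0)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

The search direction is then d_k = -H_k g_k, and the line search rule under study selects the step size along d_k.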

19.
A conjugate-gradient optimization method which is invariant to nonlinear scaling of a quadratic form is introduced. The technique has the property that the search directions generated are identical to those produced by the classical Fletcher-Reeves algorithm applied to the quadratic form. The approach enables certain nonquadratic functions to be minimized in a finite number of steps. Several examples which illustrate the efficacy of the method are included.
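For reference, the classical Fletcher-Reeves parameter is $\beta_k^{FR} = \|g_{k+1}\|^2 / \|g_k\|^2$; per the abstract, the method above generates the same search directions as FR applied to the underlying quadratic $q$ even when the objective is a nonlinear scaling $h(q(x))$ of it, which is what yields finite termination on such nonquadratic functions.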

20.
A modified PRP conjugate gradient method
This paper gives a modified PRP method which possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically generates directions lying in a trust region, without carrying out any line search technique. Numerical results indicate that the new method performs well on the given test problems. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
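For context, the classical PRP parameter is $\beta_k^{PRP} = g_k^{T}(g_k - g_{k-1}) / \|g_{k-1}\|^2$; modified PRP schemes of the kind described typically enforce a sufficient descent condition of the form $g_k^{T} d_k \le -c\,\|g_k\|^2$ with $c > 0$ by construction of the direction, which is what allows convergence guarantees without a line search safeguard.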
