Similar documents
20 similar documents found (search time: 15 ms)
1.
In this work, we present a new limited-memory conjugate gradient method based on the study of Perry's method. An attractive property of the proposed method is that it corrects the loss of orthogonality that can occur in ill-conditioned optimization problems and decelerate convergence. An additional advantage is that the memory is used only to monitor orthogonality, which is relatively cheap; when orthogonality is lost, the memory is used to generate a new orthogonal search direction. Under mild conditions, we establish the global convergence of the proposed method provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate the efficiency and robustness of the proposed method.

2.
In this paper, we present a conjugate gradient method for solving unconstrained optimization problems. Motivated by the Perry conjugate gradient method and the Dai-Liao method, an improved Perry update matrix is proposed to overcome the fact that the Perry matrix is not symmetric positive definite. The parameter in the update matrix is determined by minimizing the condition number of the iteration matrix, which ensures positive definiteness. The resulting method can also be viewed as a modified form of the CG-DESCENT method with an adjusted term. Under some mild conditions, the presented method is globally convergent. Numerical experiments in the CUTEst environment show that the proposed algorithm is promising.

3.
Numerical Algorithms - In this paper, we present a family of Perry conjugate gradient methods for solving large-scale systems of monotone nonlinear equations. The methods are developed by combining...

4.
5.
The linear conjugate gradient method is optimal for convex quadratic minimization because of its Krylov subspace minimization property. The introduction of the limited-memory BFGS method and the Barzilai-Borwein gradient method, however, heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is, to a great extent, due to the requirement of a relatively exact line search at each iteration and the loss of conjugacy of the search directions on various occasions. By contrast, the limited-memory BFGS method and the Barzilai-Borwein gradient method share the so-called asymptotic one-stepsize-per-line-search property, namely, the trial stepsize in the method will asymptotically be accepted by the line search when the iterate is close to the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method of Yuan and Stoer (1995). Specifically, by choosing the parameter in the method using the Barzilai-Borwein idea, we obtain some efficient Barzilai-Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many others without line searches. This variant of the BBCG method might enjoy the asymptotic one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.
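To make the "one stepsize per line search" idea concrete, here is a minimal sketch of the Barzilai-Borwein (BB1) gradient method on a convex quadratic, where the stepsize comes from a secant pair rather than a line search. The function name and test problem are illustrative, not from the paper.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=500):
    """Minimize f(x) = 0.5 x^T A x - b^T x with BB1 stepsizes (no line search)."""
    x = x0.copy()
    g = A @ x - b                        # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A, 2)   # conservative first stepsize
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1 stepsize from the secant pair
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])          # moderately ill-conditioned test problem
b = np.ones(3)
x = bb_gradient(A, b, np.zeros(3))
```

Note that `s @ y = s^T A s > 0` for a positive definite quadratic, so the BB1 stepsize is always well defined here.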

6.
A modified PRP conjugate gradient method
This paper gives a modified PRP method that possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically stays in a trust region without carrying out any line search technique. Numerical results indicate that the new method is interesting for the given test problems. This work is supported by Guangxi University SF grants X061041 and China NSF grant 10761001.

7.
《Optimization》2012,61(2):163-179
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions that have Lipschitz continuous partial derivatives. A novel form of non-monotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the non-monotone line search reduces to a related monotone line search. The new non-monotone line search needs to estimate the Lipschitz constant of the gradient of the objective function, for which two practical estimations are proposed to help find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.
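The PRP update itself is compact enough to sketch. The following is a minimal illustration of the PRP direction formula with a simple monotone Armijo backtracking line search standing in for the paper's non-monotone rule, plus a steepest-descent restart as a safeguard; names and the test problem are illustrative.

```python
import numpy as np

def prp_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimize f with PRP directions and a backtracking Armijo line search."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g
        t = 1.0                           # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)   # PRP parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 4.0, 16.0])             # simple convex quadratic test
b = np.array([1.0, 2.0, 3.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x = prp_descent(f, grad, np.zeros(3))
```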

8.
In this paper, we describe an implementation and give performance results for a conjugate gradient algorithm for unconstrained optimization. The algorithm is based upon the Nazareth three-term formula and incorporates Allwright preconditioning matrices and restart tests. The performance results for this combination compare favorably with existing codes.The support of the Science and Engineering Research Council is gratefully acknowledged.

9.
On the truncated conjugate gradient method
In this paper, we consider the truncated conjugate gradient method for minimizing a convex quadratic function subject to a ball trust region constraint. It is shown that the reduction in the objective function achieved by the solution of the truncated CG method is at least half of the reduction achieved by the global minimizer in the trust region. Received January 19, 1999 / Revised version received October 1, 1999 / Published online November 30, 1999

10.
Restart procedures for the conjugate gradient method
A compact and flexible updating procedure using matrix augmentation is developed. It is shown that the representation of the updated inverse does not grow monotonically in size, and may actually decrease during certain simplex iterations. Angular structures, such as GUB, are handled naturally within the partitioning framework, and require no modifications of the simplex method.

11.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence result under traditional line searches such as the Armijo, Wolfe and Goldstein line searches. In this paper a convergent version of the Liu–Storey conjugate gradient method (LS for short) is proposed for minimizing functions that have Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of the objective function, we can find an adequate step size at each iteration so as to guarantee global convergence and improve the efficiency of the LS method in practical computation.
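The step-size idea can be illustrated in isolation: given a descent direction d and an estimate L of the gradient's Lipschitz constant, a step of roughly -g^T d / (L ||d||^2) guarantees decrease for smooth functions. This is a generic sketch of that principle, not the paper's exact rule; all names are illustrative.

```python
import numpy as np

def lipschitz_step(g, d, L):
    """Step size along descent direction d from a Lipschitz estimate L."""
    return -(g @ d) / (L * (d @ d))

A = np.diag([2.0, 5.0])        # gradient of f is A x; its Lipschitz constant is 5
f = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, 1.0])
g = A @ x
d = -g                         # steepest descent direction
t = lipschitz_step(g, d, L=5.0)
x_new = x + t * d              # one guaranteed-decrease step
```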

12.
New properties of a nonlinear conjugate gradient method
Summary. This paper provides several new properties of the nonlinear conjugate gradient method in [5]. Firstly, the method is proved to have a certain self-adjusting property that is independent of the line search and the function convexity. Secondly, under mild assumptions on the objective function, the method is shown to be globally convergent with a variety of line searches. Thirdly, we find that instead of the negative gradient direction, the search direction defined by the nonlinear conjugate gradient method in [5] can be used to restart any optimization method while guaranteeing the global convergence of the method. Some numerical results are also presented. Received March 12, 1999 / Revised version received April 25, 2000 / Published online February 5, 2001

13.
Based on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. At every iteration, the method generates a search direction satisfying the sufficient descent condition, independently of any line search condition. Under the standard Wolfe line search conditions, the global convergence and linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.

14.
By introducing quadratic penalty terms, a convex non-separable quadratic network program can be reduced to an unconstrained optimization problem whose objective function is a piecewise quadratic and continuously differentiable function. A conjugate gradient method is applied to the reduced problem and its convergence is proved. The computation exploits the special network data structures originated from the network simplex method. This algorithmic framework allows direct extension to multicommodity cost flows. Some preliminary computational results are presented.

15.
It has been conjectured that the conjugate gradient method for minimizing functions of several variables has a superlinear rate of convergence, but Crowder and Wolfe show by example that the conjecture is false. Now the stronger result is given that, if the objective function is a convex quadratic and if the initial search direction is an arbitrary downhill direction, then either termination occurs or the rate of convergence is only linear, the second possibility being more usual. Relations between the starting point and the initial search direction that are necessary and sufficient for termination in the quadratic case are studied.

16.
The conjugate gradient (CG) method is widely used to solve a positive definite linear system Ax = b of order n. It is well known that the relative residual of the kth approximate solution x_k by CG (with the initial approximation x_0) is bounded above by

   2 ( (√κ − 1) / (√κ + 1) )^k,

where κ is A's spectral condition number. In 1963, Meinardus (Numer. Math., 5 (1963), pp. 14-23) gave an example that achieves this bound at one particular step, but without saying anything about all the other steps. This very example can be used to show that the bound is sharp for any given k by constructing examples to attain the bound, but such examples depend on k, and for them the next residual is exactly zero. Therefore it would be interesting to know whether there is any example on which the CG relative residuals are comparable to the bound for all k. There are two contributions in this paper:
  1. A closed formula for the CG residuals for all k on Meinardus' example is obtained; in particular, it implies that the bound is always within a constant factor of the actual residuals.
  2. A complete characterization of the extreme positive definite linear systems for which the kth CG residual achieves the bound is also presented.
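The classical convergence estimate involving the spectral condition number can be checked numerically. The sketch below runs plain CG on a small SPD system and compares the relative A-norm errors against 2((√κ − 1)/(√κ + 1))^k, the standard form of the bound; the test matrix and all names are illustrative.

```python
import numpy as np

def cg_iterates(A, b, x0, k):
    """Run k plain CG steps and return every iterate."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    xs = [x.copy()]
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        xs.append(x.copy())
    return xs

rng = np.random.default_rng(0)
evals = np.linspace(1.0, 25.0, 8)            # spectrum with kappa = 25
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q @ np.diag(evals) @ Q.T                  # SPD with known condition number
b = rng.standard_normal(8)
x_star = np.linalg.solve(A, b)

kappa = 25.0
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
a_norm = lambda v: np.sqrt(v @ A @ v)
xs = cg_iterates(A, b, np.zeros(8), 6)
errs = [a_norm(x_star - x) / a_norm(x_star) for x in xs]
bounds = [2 * rho**k for k in range(len(xs))]
# each relative A-norm error should be dominated by its bound
```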


17.
In this paper, the HS conjugate gradient method for minimizing a continuously differentiable function f on R^n is modified to have the global convergence property. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method with the Armijo step size rule are established. Numerical results show that the new algorithms are efficient.

18.
Based on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. At every iteration, the method generates a search direction satisfying the sufficient descent condition, independently of any line search condition. Under the standard Wolfe line search conditions, the global convergence and linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.

19.
Summary A generalized s-term truncated conjugate gradient method of least square type, proposed in [1a, b], is extended to a form more suitable for proving when the truncated version is identical to the full-term version. The advantages of keeping a control term in the truncated version are pointed out. A computationally efficient new algorithm, based on a special inner product with a small storage demand, is also presented. We also give simplified and slightly extended proofs of termination of the iterative sequence and of existence of an s-term recursion identical to the full-term version. Important earlier results on this latter topic are found in [15, 16, 8 and 11]. The research reported in this paper was partly supported by NATO Grant No. 648/83

20.
The conjugate gradient method for the iterative solution of a set of linear equations Ax = b is essentially equivalent to the Lanczos method, which implies that approximations to certain eigenvalues of A can be obtained at low cost. In this paper it is shown how the smallest active eigenvalue of A can be cheaply approximated, and the usefulness of this approximation for a practical termination criterion for the conjugate gradient method is studied. It is proved that this termination criterion is reliable in many relevant situations.
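The CG-Lanczos equivalence this item relies on can be sketched directly: the CG coefficients α_k, β_k assemble into a tridiagonal Lanczos matrix T_k whose eigenvalues (Ritz values) approximate eigenvalues of A, with the extreme ones converging first. This is a generic illustration of that connection, not the paper's specific estimator; names and the test matrix are illustrative.

```python
import numpy as np

def cg_ritz(A, b, k):
    """Run k CG steps; return the solution estimate and Ritz values of T_k."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    alphas, betas = [], []
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        alphas.append(alpha)
        betas.append(beta)
        p = r_new + beta * p
        r = r_new
    # Lanczos tridiagonal assembled from the CG coefficients:
    # T[j,j] = 1/alpha_j + beta_{j-1}/alpha_{j-1}, T[j,j+1] = sqrt(beta_j)/alpha_j
    diag = [1.0 / alphas[0]] + [1.0 / alphas[j] + betas[j - 1] / alphas[j - 1]
                                for j in range(1, k)]
    off = [np.sqrt(betas[j]) / alphas[j] for j in range(k - 1)]
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return x, np.sort(np.linalg.eigvalsh(T))

rng = np.random.default_rng(1)
evals = np.array([1.0, 2.5, 4.0, 7.0, 9.0])   # known spectrum
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag(evals) @ Q.T
b = rng.standard_normal(5)
x, ritz = cg_ritz(A, b, 5)                     # n steps: Ritz values match spectrum
```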
