Similar Literature
20 similar records found
1.
This paper extends some theoretical properties of the conjugate gradient-type method FLR (Ref. 1) for iteratively solving indefinite linear systems of equations. The latter algorithm is a generalization of the conjugate gradient method of Hestenes and Stiefel (CG, Ref. 2). We develop a complete relationship between the FLR algorithm and the Lanczos process in the case of indefinite and possibly singular matrices. We then develop simple theoretical results for the FLR algorithm in order to construct an approximation of the Moore-Penrose pseudoinverse of an indefinite matrix. Our approach supplies the theoretical framework for applications within unconstrained optimization. This work was partially supported by the MIUR, FIRB Research Program on Large-Scale Nonlinear Optimization, and by the Ministero delle Infrastrutture e dei Trasporti in the framework of the Research Program on Safety. The author thanks Stefano Lucidi and Massimo Roma for fruitful discussions, as well as the Associate Editor for effective comments.
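In the simpler symmetric positive-definite setting, the link between conjugate directions and the (pseudo)inverse can be made concrete: the directions p_i generated by CG satisfy A^{-1} = sum_i p_i p_i^T / (p_i^T A p_i) once a full conjugate set is available, and truncating the sum gives an approximate inverse. The following is a minimal sketch of that identity only; it assumes an SPD matrix and is not the paper's FLR construction for indefinite or singular matrices.

    import numpy as np

    def cg_inverse_approximation(A, b, k):
        """Run k CG steps on A x = b, accumulating the expansion
        sum_i p_i p_i^T / (p_i^T A p_i); for SPD A this equals A^{-1}
        after n steps (a sketch, not the paper's FLR scheme)."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        Ainv = np.zeros((len(b), len(b)))
        for _ in range(k):
            Ap = A @ p
            pAp = p @ Ap
            Ainv += np.outer(p, p) / pAp          # rank-one update
            alpha = (r @ r) / pAp
            x += alpha * p
            r_new = r - alpha * Ap
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return Ainv

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(np.allclose(cg_inverse_approximation(A, b, 2), np.linalg.inv(A)))  # True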

2.
Efficient generalized conjugate gradient algorithms, part 2: Implementation
In Part 1 of this paper (Ref. 1), a new, generalized conjugate gradient algorithm was proposed and its convergence investigated. In this second part, the new algorithm is compared numerically with other modified conjugate gradient methods and with limited-memory quasi-Newton methods.

3.
In this paper, we describe an application of the planar conjugate gradient method introduced in Part 1 (Ref. 1), aimed at solving indefinite nonsingular sets of linear equations. We prove that it can be used fruitfully within optimization frameworks; in particular, we present a globally convergent truncated Newton scheme which uses the above planar method for solving the Newton equation. Finally, our approach is tested on several problems from the CUTE collection (Ref. 2). This work was supported by MIUR, FIRB Research Program on Large-Scale Nonlinear Optimization, Rome, Italy. The author acknowledges Luigi Grippo and Stefano Lucidi, who contributed considerably to the elaboration of this paper. The exchange of experiences with Massimo Roma was a constant help in the investigation. The author expresses his gratitude to the Associate Editor and the referees for suggestions and corrections.
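The outer structure of such a truncated Newton scheme is easy to sketch: a Newton loop whose inner system is solved approximately by CG, exiting on negative curvature. The sketch below uses plain CG and a simple Armijo backtracking search; the paper's planar inner iteration, which handles indefiniteness more carefully, is not reproduced.

    import numpy as np

    def truncated_newton(f, grad, hess, x, tol=1e-6, max_outer=100):
        """Truncated Newton: solve H d = -g inexactly by CG, stopping the
        inner iteration on negative curvature (a plain-CG sketch; the paper
        uses a planar CG method as the inner solver)."""
        for _ in range(max_outer):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            H = hess(x)
            d, r = np.zeros_like(x), -g.copy()
            p = r.copy()
            for _ in range(len(x)):
                Hp = H @ p
                curv = p @ Hp
                if curv <= 1e-12 * (p @ p):       # nonpositive curvature: exit
                    if not d.any():
                        d = -g                    # fall back to steepest descent
                    break
                alpha = (r @ r) / curv
                d = d + alpha * p
                r_new = r - alpha * Hp
                if np.linalg.norm(r_new) < 0.1 * np.linalg.norm(g):
                    break                         # truncation test
                p = r_new + ((r_new @ r_new) / (r @ r)) * p
                r = r_new
            t = 1.0                               # Armijo backtracking
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5
            x = x + t * d
        return x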

4.
This paper proposes a line search technique that satisfies a relaxed form of the strong Wolfe conditions, in order to guarantee the descent condition at each iteration of the Polak-Ribière-Polyak conjugate gradient algorithm. It is proved that this line search preserves the usual convergence properties of any descent algorithm; in particular, it is shown that the Zoutendijk condition holds under mild assumptions. It is also proved that the resulting conjugate gradient algorithm is convergent under a strong convexity assumption. For the nonconvex case, a globally convergent modification is proposed, and numerical tests are presented. This paper is based on an earlier work presented at the International Symposium on Mathematical Programming in Lausanne in 1997. The author thanks J. C. Gilbert for his advice and M. Al-Baali for some recent discussions which motivated him to write this paper. Special thanks to G. Liu, J. Nocedal, and R. Waltz for the availability of the software CG+, and to one of the referees who pointed him to the paper of Grippo and Lucidi (Ref. 1).
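For reference, the standard (unrelaxed) strong Wolfe conditions and the Polak-Ribière-Polyak coefficient can be written down in a few lines; the relaxed form actually used by the paper is not reproduced here.

    import numpy as np

    def strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
        """Standard strong Wolfe test at step size alpha (the paper
        satisfies a relaxed form of these conditions)."""
        g0 = grad(x) @ d
        armijo = f(x + alpha * d) <= f(x) + c1 * alpha * g0
        curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(g0)
        return armijo and curvature

    def beta_prp(g_new, g_old):
        """Polak-Ribiere-Polyak coefficient."""
        return g_new @ (g_new - g_old) / (g_old @ g_old)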

5.
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard set of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
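A minimal ranking harness in the same spirit, assuming SciPy is available; SciPy ships only one nonlinear CG variant, so the sketch ranks across solver families rather than across the ten CG variants tested in the paper.

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.full(100, -1.2)                       # medium-scale starting point
    results = {}
    for method in ["CG", "BFGS", "L-BFGS-B"]:
        out = minimize(rosen, x0, jac=rosen_der, method=method,
                       options={"maxiter": 10000})
        results[method] = (out.nfev, out.fun)

    # rank solvers by function evaluations, a crude efficiency measure
    for method, (nfev, fval) in sorted(results.items(), key=lambda kv: kv[1][0]):
        print(f"{method:10s} nfev={nfev:6d} f={fval:.3e}")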

6.
It is shown that the generalization of the conjugate direction method of Van Wyk (Ref. 1) is the direction counterpart to Fletcher's biconjugate gradient algorithm (Ref. 2).

7.
Both conjugate gradient and quasi-Newton methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage to implement than a quasi-Newton code and therefore find application when storage limitations occur. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms so as to obtain an algorithm with good convergence properties and low storage requirements. One such method is the code CONMIN due to Shanno and Phua; it has proven quite successful, but it has one limitation: it has no middle ground, in that it either operates as a quasi-Newton code using O(n^2) storage locations or as a conjugate gradient code using 7n locations, and it cannot take advantage of the not unusual situation where more than 7n locations are available yet a quasi-Newton code would require an excessive amount of storage. In this paper we present a way of looking at conjugate gradient algorithms which was in fact given by Shanno and Phua but which we carry further, emphasize, and clarify; this applies in particular to Beale's 3-term recurrence relation. Using this point of view, we develop a new combined CG-QN algorithm which can use whatever storage is available; CONMIN occurs as a special case. We present numerical results to demonstrate that the new algorithm is never worse than CONMIN and that it is almost always better if even a small amount of extra storage is provided.
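The storage trade-off exploited here was later made explicit by limited-memory quasi-Newton methods, which keep m correction pairs and so use roughly 2mn locations, with m chosen to fit the available memory. The sketch below is the standard L-BFGS two-loop recursion, included only to convey the variable-storage idea; it is not the combined CG-QN algorithm of this paper.

    import numpy as np

    def two_loop_direction(g, s_list, y_list):
        """Compute -H*g implicitly from m stored (s, y) pairs via the
        L-BFGS two-loop recursion; storage is O(m*n), with m set by the
        memory available (an illustration of variable-storage methods,
        not this paper's CG-QN algorithm)."""
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if s_list:                                # initial scaling H0 = gamma*I
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        return -q                                 # quasi-Newton search direction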

8.
Conjugate gradient methods are a widely used class of methods for solving large-scale unconstrained optimization problems. A new nonlinear conjugate gradient (CG) method is proposed; theoretical analysis shows that the new algorithm enjoys the sufficient descent property under several line search conditions. A global convergence theorem for the new CG algorithm is further proved. Finally, extensive numerical experiments indicate that the new algorithm is computationally more efficient than several classical CG methods.
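The skeleton shared by such methods, with a restart safeguard that enforces sufficient descent, looks as follows; the specific beta of the new method is not given in the abstract, so any coefficient rule can be plugged in.

    import numpy as np

    def nonlinear_cg(f, grad, x, beta_rule, iters=1000, tol=1e-6):
        """Generic nonlinear CG skeleton with a sufficient-descent
        safeguard (beta_rule(g_new, g_old, d) supplies the coefficient;
        the abstract's particular beta is not reproduced here)."""
        g = grad(x)
        d = -g
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            t = 1.0                               # Armijo backtracking
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5
            x = x + t * d
            g_new = grad(x)
            d = -g_new + beta_rule(g_new, g, d) * d
            if g_new @ d > -1e-10 * (g_new @ g_new):
                d = -g_new                        # restart: enforce descent
            g = g_new
        return x

    # example: Fletcher-Reeves rule
    # x_min = nonlinear_cg(f, grad, x0, lambda gn, go, d: (gn @ gn) / (go @ go))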

9.
王开荣, 吴伟霞. 《经济数学》, 2007, 24(4): 431-436
The conjugate gradient method is an effective method for solving unconstrained optimization problems. In this paper, a parameter is introduced into β_k on the basis of the Dai-Yuan coefficient β_k^DY, yielding a new class of conjugate gradient methods that are proved to possess the sufficient descent property and global convergence under the strong Wolfe line search.

10.
On the descent property and convergence of conjugate gradient methods
This paper gives a restart criterion designed to guarantee the descent property of the conjugate gradient method. We not only obtain the convergence of general conjugate gradient methods with different parameter choices, but also generalize the conclusions of Ref. 1.
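A typical restart test of this kind checks whether the current direction is still a sufficiently good descent direction; the criterion below is illustrative, since the abstract does not state the paper's exact rule.

    import numpy as np

    def needs_restart(g, d, c=0.2):
        """Restart with d = -g when the CG direction makes too shallow an
        angle with the negative gradient (an illustrative descent-based
        criterion, not necessarily the paper's)."""
        return g @ d > -c * np.linalg.norm(g) * np.linalg.norm(d)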

11.
In this paper we consider computing estimates of the norm of the error in the conjugate gradient (CG) algorithm. Formulas were given in a paper by Golub and Meurant (1997). Here, we first prove that these expressions are indeed upper and lower bounds for the A-norm of the error. Moreover, starting from these formulas, we investigate the computation of the l_2-norm of the error. Finally, we define an adaptive algorithm where the approximations of the extreme eigenvalues needed to obtain upper bounds are computed while running CG, leading to an improvement of the upper bounds for the norm of the error. Numerical experiments show the effectiveness of this algorithm.
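The simplest of these estimates rests on the identity ||x - x_k||_A^2 = sum_{j>=k} alpha_j ||r_j||^2 for CG in exact arithmetic; truncating the sum after d further iterations yields a lower estimate of the A-norm of the error at step k. A sketch of that delayed lower estimate, assuming an SPD matrix; the adaptive upper bounds of the paper additionally require estimates of the extreme eigenvalues.

    import numpy as np

    def cg_with_error_estimate(A, b, d=5, iters=200, tol=1e-12):
        """Plain CG that also returns, with a delay of d iterations, the
        quantities sum_{j=k}^{k+d-1} alpha_j ||r_j||^2, lower estimates of
        the squared A-norm of the error at step k (a sketch of the lower
        bound only, not the paper's adaptive upper bounds)."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        terms, estimates = [], []
        for k in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            terms.append(alpha * (r @ r))         # alpha_k * ||r_k||^2
            x += alpha * p
            r_new = r - alpha * Ap
            if k >= d:
                estimates.append(sum(terms[k - d:k]))  # for step k - d
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x, estimates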

12.
We present modifications of the generalized conjugate gradient algorithm of Liu and Storey for unconstrained optimization problems (Ref. 1), extending its applicability to situations where the search directions are not defined. The use of new search directions is proposed and one additional condition is imposed on the inexact line search. The convergence of the resulting algorithm can be established under standard conditions for a twice continuously differentiable function with a bounded level set. Algorithms based on these modifications have been tested on a number of problems, showing considerable improvements. Comparisons with the BFGS and other quasi-Newton methods are also given.

13.
Mathematical programming is a rich and well-developed area in operations research. Nevertheless, many challenging problems remain, one of which is large-scale optimization. In this article, a modified Hestenes-Stiefel (HS) conjugate gradient (CG) algorithm with a nonmonotone line search technique is presented. The algorithm uses not only gradient information but also function-value information, and the sufficient descent condition holds without any line search. Global convergence is established for nonconvex functions under suitable conditions. Numerical results show that the proposed algorithm compares favorably with existing CG methods on large-scale optimization problems.
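A nonmonotone acceptance test of the kind used here (in the style of Grippo, Lampariello, and Lucidi) compares the trial value against the worst of the last M function values rather than the current one; the sketch below is generic and may differ in detail from the paper's rule.

    def nonmonotone_accept(f_trial, f_history, g_dot_d, alpha, delta=1e-4, M=10):
        """Accept step alpha if the trial value lies below the maximum of
        the last M function values plus an Armijo-type decrease term
        (g_dot_d = g'd < 0 for a descent direction); a generic
        Grippo-Lampariello-Lucidi-style test."""
        f_ref = max(f_history[-M:])               # nonmonotone reference value
        return f_trial <= f_ref + delta * alpha * g_dot_d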

14.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao conjugacy condition and, under a suitable condition, agrees with the Newton direction; furthermore, this property does not depend on the line search. We then prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of the new method is proved, and numerical comparisons show that the hybrid conjugate gradient algorithm is efficient.
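Concretely, the hybrid coefficient is the convex combination beta(theta) = (1 - theta) beta_LS + theta beta_FR with theta in [0, 1], and the Dai-Liao condition requires d_{k+1}' y_k = -t g_{k+1}' s_k. A sketch of both, with theta and t as free parameters:

    import numpy as np

    def beta_hybrid(g_new, g_old, d, theta):
        """Convex combination of the Liu-Storey and Fletcher-Reeves
        coefficients, theta in [0, 1]."""
        y = g_new - g_old
        beta_ls = (g_new @ y) / -(d @ g_old)
        beta_fr = (g_new @ g_new) / (g_old @ g_old)
        return (1.0 - theta) * beta_ls + theta * beta_fr

    def dai_liao_residual(d_new, y, g_new, s, t):
        """Deviation of d_{k+1} from the Dai-Liao conjugacy condition
        d_{k+1}' y_k = -t g_{k+1}' s_k (zero means satisfied)."""
        return d_new @ y + t * (g_new @ s)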

15.
In this Note, we formulate a sparse Krylov-based algorithm for solving large-scale linear systems of algebraic equations arising from the discretization of randomly parametrized (or stochastic) elliptic partial differential equations (SPDEs). We analyze the proposed sparse conjugate gradient (CG) algorithm within the framework of inexact Krylov subspace methods, prove its convergence and study its abstract computational cost. Numerical studies conducted on stochastic diffusion models show that the proposed sparse CG algorithm outperforms the classical CG method when the sought solutions admit a sparse representation in a polynomial chaos basis. In such cases, the sparse CG algorithm recovers almost exactly the sparsity pattern of the exact solutions, which enables accelerated convergence. In the case when the SPDE solution does not admit a sparse representation, the convergence of the proposed algorithm is very similar to the classical CG method.

16.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and therefore is capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular direction of search, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.

17.
Existing conjugate gradient (CG)-based methods for convex quadratic programs with bound constraints require many iterations for solving elastic contact problems. These algorithms are too cautious in expanding the active set and are hampered by frequent restarting of the CG iteration. We propose a new algorithm called the Bound-Constrained Conjugate Gradient method (BCCG). It combines the CG method with an active-set strategy that truncates variables crossing their bounds and then continues the CG iteration (using the Polak–Ribière formula) instead of restarting it. We provide a case with n = 3 demonstrating that the method may fail in general, but we conjecture that it always works if the system matrix A is non-negative. Numerical results demonstrate the effectiveness of the method for large-scale elastic contact problems.
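The active-set mechanics can be sketched for min 0.5 x'Ax - b'x subject to x >= 0: variables crossing the bound are truncated and frozen, and the recurrence continues with a Polak-Ribière coefficient instead of restarting. A simplified sketch under those assumptions, not the authors' full BCCG.

    import numpy as np

    def bccg_sketch(A, b, iters=500, tol=1e-8):
        """Bound-constrained CG for min 0.5 x'Ax - b'x with x >= 0:
        truncate at the bound, freeze active variables, and continue the
        recurrence with a Polak-Ribiere coefficient rather than restarting
        (a simplified sketch of the BCCG idea)."""
        x = np.zeros_like(b)
        r_old, p = None, None
        for _ in range(iters):
            g = A @ x - b                         # gradient of the quadratic
            free = (x > 0) | (g < 0)              # variables allowed to move
            r = np.where(free, -g, 0.0)           # projected residual
            if np.linalg.norm(r) < tol:
                break                             # KKT conditions met
            if p is None:
                p = r.copy()
            else:
                beta = max(0.0, (r @ (r - r_old)) / (r_old @ r_old))
                p = r + beta * p                  # Polak-Ribiere: continue CG
                p[~free] = 0.0                    # keep frozen variables fixed
            alpha = (r @ p) / (p @ (A @ p))
            x = np.maximum(x + alpha * p, 0.0)    # truncate at the bound
            r_old = r
        return x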

18.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG of Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as a spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results on a set of 500 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient algorithm SCG. The author was awarded the Romanian Academy Grant 168/2003.
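The spectral choice referred to is the Barzilai-Borwein-type parameter theta_k = s_k's_k / s_k'y_k, which scales the gradient inside a search direction of the form d_{k+1} = -theta_k g_{k+1} + beta_k s_k. A sketch of that single ingredient, not of the full BFGS-safeguarded algorithm:

    import numpy as np

    def spectral_parameter(s, y):
        """Spectral (Barzilai-Borwein type) scaling theta = s's / s'y."""
        return (s @ s) / (s @ y)

    def scaled_cg_direction(g_new, s, y, beta):
        """Generic scaled-CG direction -theta*g_{k+1} + beta*s_k (one
        ingredient of spectral CG methods, not the paper's full scheme)."""
        return -spectral_parameter(s, y) * g_new + beta * s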

19.
A three-parameter family of nonlinear conjugate gradient methods

In this paper, we propose a three-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes the six already existing practical nonlinear conjugate gradient methods, but also subsumes some other families of nonlinear conjugate gradient methods as subfamilies. With Powell's restart criterion, the three-parameter family with the strong Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. This paper can also be regarded as a brief review of nonlinear conjugate gradient methods.

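In standard notation (g the gradient, y = g_new - g_old, d the search direction), the six practical coefficients subsumed by such families are as follows; the three-parameter family itself is not reproduced here.

    import numpy as np

    def classical_betas(g_new, g_old, d):
        """The six classical nonlinear CG coefficients (standard formulas)."""
        y = g_new - g_old
        return {
            "FR":  (g_new @ g_new) / (g_old @ g_old),   # Fletcher-Reeves
            "PRP": (g_new @ y) / (g_old @ g_old),       # Polak-Ribiere-Polyak
            "HS":  (g_new @ y) / (d @ y),               # Hestenes-Stiefel
            "DY":  (g_new @ g_new) / (d @ y),           # Dai-Yuan
            "CD":  (g_new @ g_new) / -(d @ g_old),      # Conjugate Descent
            "LS":  (g_new @ y) / -(d @ g_old),          # Liu-Storey
        }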


20.
This paper deals with the numerical implementation of the exact boundary controllability of the Reissner model for shallow spherical shells (Ref. 1). The problem is attacked by the Hilbert uniqueness method (HUM, Refs. 2–4), and we propose a semidiscrete method for the numerical approximation of the minimization problem associated with the exact controllability problem. The numerical results compare well with the results obtained by a finite-difference and conjugate gradient method in Ref. 5. This work was done when the first two authors were at CNR-IAC, Rome, Italy, as graduate students.
