Similar Articles
20 similar articles found.
1.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially when the dimension is large. In 2001, Dai and Liao proposed a new conjugacy condition, and based on it two nonlinear conjugate gradient methods were constructed. Using the trust-region idea, this paper gives a self-adaptive technique for the two methods. The numerical results show that this technique works well on the given nonlinear optimization test problems.
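For reference, the Dai-Liao conjugacy condition and the parameter derived from it are usually written as follows (notation is my assumption: g_k = ∇f(x_k), s_{k-1} = x_k - x_{k-1}, y_{k-1} = g_k - g_{k-1}, and t ≥ 0 a scalar):

```latex
d_k^{T} y_{k-1} = -t\, g_k^{T} s_{k-1}, \qquad
\beta_k^{\mathrm{DL}} = \frac{g_k^{T} y_{k-1} - t\, g_k^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}}, \qquad
d_k = -g_k + \beta_k^{\mathrm{DL}} d_{k-1}.
```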

2.
A class of globally convergent conjugate gradient methods
Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale problems. However, unlike quasi-Newton methods, conjugate gradient methods were usually analyzed individually. In this paper, we propose a class of conjugate gradient methods, which can be regarded as a kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method with the scalar β_k having the form φ_k/φ_{k-1}. Consequently, the class of conjugate gradient methods can be analyzed in a unified way.
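Concretely, the unified scalar has the following form; the Fletcher-Reeves method is the instance obtained by taking φ_k = ||g_k||² (a sketch under assumed standard notation):

```latex
\beta_k = \frac{\varphi_k}{\varphi_{k-1}}, \qquad
\beta_k^{\mathrm{FR}} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}
\quad (\varphi_k = \|g_k\|^2).
```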

3.
A class of regularized conjugate gradient methods is presented for solving large sparse systems of linear equations whose coefficient matrix is an ill-conditioned symmetric positive definite matrix. The convergence properties of these methods are discussed in depth, and the best possible choices of the parameters involved in the new methods are investigated in detail. Numerical computations show that the new methods are more efficient and robust than both classical relaxation methods and classical conjugate direction methods.
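For orientation, here is a minimal sketch of the classical (unregularized) conjugate gradient iteration for a symmetric positive definite system that such methods build on; the paper's regularization is not reproduced:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Classical CG for A x = b with A symmetric positive definite."""
    x = np.zeros(b.shape[0]) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # exact line search for quadratics
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # conjugate direction update
        rs_old = rs_new
    return x
```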

4.
In this work we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method for strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of the objective function. A convergence analysis of the proposed algorithm is provided. Some numerical experiments are performed in order to compare the efficiency and effectiveness of the proposed methods with similar methods in the literature. Experimentally, it is observed that our proposals accelerate the gradient method at nearly no extra computational cost, which makes them a good alternative for solving large-scale problems.
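For context, a minimal sketch of the gradient method with the classical first Barzilai-Borwein step size on a strictly convex quadratic; the paper's new second-order step sizes are not reproduced here:

```python
import numpy as np

def bb_gradient(A, b, x0, max_iter=500, tol=1e-8):
    """Gradient method with the classical BB1 step size for
    f(x) = 0.5 x^T A x - b^T x, A symmetric positive definite."""
    x = x0.astype(float).copy()
    g = A @ x - b
    if np.linalg.norm(g) < tol:
        return x
    alpha = 1.0 / np.linalg.norm(g)     # safeguarded first step
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)       # BB1: argmin_a ||(1/a) s - y||
        x, g = x_new, g_new
    return x
```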

5.
We study a class of preconditioners for solving large-scale linear systems arising from fully implicit reservoir simulation. These methods are discussed in the framework of the auxiliary space preconditioning method for generality. Unlike in the case of classical algebraic preconditioning methods, we take several analytical and physical considerations into account. In addition, we choose appropriate auxiliary problems to design the robust solvers herein. More importantly, our methods are user-friendly and general enough to be easily ported to existing petroleum reservoir simulators. We test the efficiency and robustness of the proposed methods by applying them to a couple of benchmark problems and real-world reservoir problems. The numerical results show that our methods are both efficient and robust for large reservoir models.

6.
Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze monotone projected gradient methods in this paper. We show by experiments and analyses that, for the new methods, it is generally a bad option to compute steplengths based on the negative gradients. Thus, in our algorithms, some continuous or discontinuous projected gradients are used instead to compute the steplengths. Numerical experiments on a wide variety of test problems are presented, indicating that the new methods usually outperform the PBB method.
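As a baseline illustration, here is a minimal projected gradient sketch for a box-constrained quadratic with a BB-type steplength; the paper's steplength rules based on projected gradients are not reproduced (the simple variant below is an assumption for illustration):

```python
import numpy as np

def projected_gradient_box(A, b, lo, hi, x0, max_iter=1000, tol=1e-8):
    """Sketch: min 0.5 x^T A x - b^T x  s.t.  lo <= x <= hi."""
    x = np.clip(x0.astype(float), lo, hi)
    alpha = 1.0
    for _ in range(max_iter):
        g = A @ x - b
        pg = x - np.clip(x - g, lo, hi)          # projected gradient (optimality measure)
        if np.linalg.norm(pg) < tol:
            break
        x_new = np.clip(x - alpha * g, lo, hi)   # gradient step projected onto the box
        s = x_new - x
        y = A @ s                                 # gradient change for a quadratic
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)            # BB1 steplength
        x = x_new
    return x
```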

7.
An adaptive multi-scale conjugate gradient method for distributed parameter estimation (inverse problems) of wave equations is presented. The identification of the coefficients of wave equations in two dimensions is considered. First, the conjugate gradient method for optimization is adopted to solve the inverse problems. Second, the idea of multi-scale inversion and the necessary conditions under which the optimal solution is a fixed point of the multi-scale inversion method are considered. An adaptive multi-scale inversion method for the inverse problem is then developed in conjunction with the conjugate gradient method. Finally, some numerical results are shown to indicate the robustness and effectiveness of our method.

8.
A NOTE ON THE NONLINEAR CONJUGATE GRADIENT METHOD
The conjugate gradient method for unconstrained optimization problems varies with a scalar. In this note, a general condition concerning the scalar is given which ensures the global convergence of the method in the case of strong Wolfe line searches. It is also discussed how to use this result to obtain the convergence of the famous Fletcher-Reeves and Polak-Ribière-Polyak conjugate gradient methods. It is further noted that the condition cannot be relaxed in a certain sense.
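For context, the strong Wolfe line search conditions referred to here are the standard ones, with constants 0 < c₁ < c₂ < 1:

```latex
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k\, g_k^{T} d_k, \qquad
\left| g(x_k + \alpha_k d_k)^{T} d_k \right| \le c_2 \left| g_k^{T} d_k \right|.
```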

9.
孙清滢 (Sun Qingying). 数学季刊 (Chinese Quarterly Journal of Mathematics), 2003, 18(2): 154-162
Conjugate gradient optimization algorithms depend on the search directions, which vary with different choices of the parameters in them. In this note, by combining the nice numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method with other kinds of step size rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.
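The PR (Polak-Ribière) and HS (Hestenes-Stiefel) parameters mentioned here are, in standard notation with y_{k-1} = g_k - g_{k-1}:

```latex
\beta_k^{\mathrm{PR}} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^2}, \qquad
\beta_k^{\mathrm{HS}} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}.
```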

10.
A modified polynomial preserving gradient recovery technique is proposed. Unlike the polynomial preserving gradient recovery technique, the gradient recovered with the modified polynomial preserving recovery (MPPR) is constructed element-wise, and it is discontinuous across the interior edges. One advantage of the MPPR technique is that the implementation is easier when adaptive meshes are involved. Superconvergence results for the gradient recovered with MPPR are proved for finite element methods for elliptic boundary value problems and eigenvalue problems under adaptive meshes. The MPPR is applied to adaptive finite element methods to construct asymptotically exact a posteriori error estimates. Numerical tests are provided to examine the theoretical results and the effectiveness of the adaptive finite element algorithms.

11.
Mathematical programming is a rich and well-developed area in operations research. Nevertheless, there remain many challenging problems in this area, one of which is the large-scale optimization problem. In this article, a modified Hestenes-Stiefel (HS) conjugate gradient (CG) algorithm with a nonmonotone line search technique is presented. This algorithm uses information about not only the gradient value but also the function value. Moreover, the sufficient descent condition holds without any line search. Global convergence is established for nonconvex functions under suitable conditions. Numerical results show that the proposed algorithm compares favorably with existing CG methods for large-scale optimization problems.
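The sufficient descent condition mentioned here is the standard requirement that, for some constant c > 0 independent of the iteration,

```latex
g_k^{T} d_k \le -c\, \|g_k\|^2 \quad \text{for all } k.
```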

12.
Optimization, 2012, 61(4): 993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been much studied recently. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. The numerical results show that the methods are efficient for the given test problems. In addition, the methods related to this family are discussed in a unified way.
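One common way to write such a two-parameter family interpolates the classical choices via parameters λ_k, μ_k ∈ [0,1]; this specific parameterization is an assumption for illustration, not necessarily the paper's exact formula:

```latex
\beta_k = \frac{\lambda_k \|g_k\|^2 + (1-\lambda_k)\, g_k^{T} y_{k-1}}
               {\mu_k \|g_{k-1}\|^2 + (1-\mu_k)\, d_{k-1}^{T} y_{k-1}},
```

which recovers FR (λ_k = μ_k = 1), DY (λ_k = 1, μ_k = 0), PRP (λ_k = 0, μ_k = 1) and HS (λ_k = μ_k = 0).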

13.
Existing conjugate gradient (CG)-based methods for convex quadratic programs with bound constraints require many iterations when solving elastic contact problems. These algorithms are too cautious in expanding the active set and are hampered by frequent restarting of the CG iteration. We propose a new algorithm called the Bound-Constrained Conjugate Gradient method (BCCG). It combines the CG method with an active-set strategy that truncates variables crossing their bounds and continues (using the Polak-Ribière formula) instead of restarting CG. We provide a case with n=3 demonstrating that this method may fail in general, but we conjecture that it always works if the system matrix A is non-negative. Numerical results demonstrate the effectiveness of the method for large-scale elastic contact problems.
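A rough sketch of an iteration in this spirit, truncating at the bounds and continuing with a Polak-Ribière-type formula instead of restarting; the active-set bookkeeping below is a simplification of my own, not the paper's exact algorithm:

```python
import numpy as np

def bccg_sketch(A, b, lo, hi, x0, max_iter=500, tol=1e-8):
    """Sketch: min 0.5 x^T A x - b^T x  s.t.  lo <= x <= hi."""
    x = np.clip(x0.astype(float), lo, hi)
    r = b - A @ x                                   # r = -grad f(x)
    free = ((x > lo) | (r > 0)) & ((x < hi) | (r < 0))
    p = np.where(free, r, 0.0)
    for _ in range(max_iter):
        rf = np.where(free, r, 0.0)
        if np.linalg.norm(rf) < tol:
            break
        Ap = A @ p
        curv = p @ Ap
        if curv <= 0:                               # safeguard for the sketch
            break
        alpha = (rf @ p) / curv                     # exact step along p
        x = np.clip(x + alpha * p, lo, hi)          # truncate at the bounds
        r_new = b - A @ x
        free = ((x > lo) | (r_new > 0)) & ((x < hi) | (r_new < 0))
        rf_new = np.where(free, r_new, 0.0)
        rf_old = np.where(free, r, 0.0)
        denom = rf_old @ rf_old
        beta = max(0.0, rf_new @ (rf_new - rf_old) / denom) if denom > 0 else 0.0
        p = rf_new + beta * np.where(free, p, 0.0)  # continue, don't restart
        r = r_new
    return x
```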

14.
Conjugate gradient methods are a widely applied class of methods for solving large-scale unconstrained optimization problems. A new nonlinear conjugate gradient (CG) method is proposed; theoretical analysis shows that the new algorithm satisfies the sufficient descent property under various line search conditions. A global convergence theorem for the new CG algorithm is further proved. Finally, extensive numerical experiments show that, compared with several traditional classes of CG methods, the new algorithm is computationally more efficient.
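Since the abstract does not give the new method's parameter formula, here is a generic nonlinear CG skeleton into which any β-rule can be plugged; the names and the backtracking rule below are assumptions for illustration:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule, max_iter=1000, tol=1e-6):
    """Generic nonlinear CG skeleton with Armijo backtracking.
    `beta_rule(g_new, g_old, d)` supplies the CG parameter."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:           # safeguard: fall back to steepest descent
            d = -g
        # Armijo backtracking (the convergence theory typically needs Wolfe)
        alpha, fx, slope = 1.0, f(x), g @ d
        for _ in range(50):
            if f(x + alpha * d) <= fx + 1e-4 * alpha * slope:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        d = -g_new + beta_rule(g_new, g, d) * d
        x, g = x_new, g_new
    return x

# Example plug-in: the classical Hestenes-Stiefel parameter
hs_beta = lambda gn, g, d: (gn @ (gn - g)) / (d @ (gn - g))
```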

15.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, including the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
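The Hessian structure being exploited is the standard one for least squares objectives f(x) = ½||r(x)||² with residual vector r and Jacobian J:

```latex
\nabla f(x) = J(x)^{T} r(x), \qquad
\nabla^2 f(x) = J(x)^{T} J(x) + \sum_{i} r_i(x)\, \nabla^2 r_i(x),
```

where the first term is the Gauss-Newton part and the second term is the part approximated via the structured secant condition.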

16.
Optimization, 2012, 61(7): 929-941
To take advantage of the attractive features of the Hestenes-Stiefel and Dai-Yuan conjugate gradient (CG) methods, we suggest a hybridization of these methods using a quadratic relaxation of a hybrid CG parameter proposed by Dai and Yuan. In the proposed method, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, we show that our method is globally convergent for uniformly convex functions. We give a numerical comparison of the implementations of our method and two efficient hybrid CG methods proposed by Dai and Yuan using a set of unconstrained optimization test problems from the CUTEr collection. Numerical results show the efficiency of the proposed hybrid CG method in the sense of the performance profile introduced by Dolan and Moré.
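The two underlying parameters, and the generic convex-combination form such hybrids take (the paper's exact quadratic relaxation is not reproduced here), are:

```latex
\beta_k^{\mathrm{HS}} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}, \qquad
\beta_k^{\mathrm{DY}} = \frac{\|g_k\|^2}{d_{k-1}^{T} y_{k-1}}, \qquad
\beta_k = (1-\theta_k)\,\beta_k^{\mathrm{HS}} + \theta_k\,\beta_k^{\mathrm{DY}},
\quad \theta_k \in [0,1].
```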

17.
A new family of conjugate gradient methods
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak-Ribière-Polyak and Liu-Storey conjugate gradient methods are special cases of the new class of conjugate gradient methods. By estimating the local Lipschitz constant of the derivative of objective functions, we can find an adequate step size and substantially decrease the function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
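A standard nonmonotone acceptance rule of the kind referred to here (the Grippo-Lampariello-Lucidi type; the paper's exact technique may differ) requires, for a constant δ ∈ (0,1) and memory length m(k):

```latex
f(x_k + \alpha_k d_k) \le \max_{0 \le j \le m(k)} f(x_{k-j}) + \delta\, \alpha_k\, g_k^{T} d_k.
```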

18.
Conjugate gradient methods have attracted much attention because they can be directly applied to large-scale unconstrained optimization problems. In order to incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate a descent search direction. On the other hand, Hager and Zhang (2005) proposed another conjugate gradient method which always generates a descent search direction.
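For reference, the Hager-Zhang parameter has the form below (stated as I recall it, with y_k = g_{k+1} - g_k following their indexing convention); it guarantees g_{k+1}ᵀd_{k+1} ≤ -(7/8)||g_{k+1}||² independently of the line search:

```latex
\beta_k^{\mathrm{HZ}} = \frac{1}{d_k^{T} y_k}
\left( y_k - 2 d_k \frac{\|y_k\|^2}{d_k^{T} y_k} \right)^{T} g_{k+1}.
```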

19.
Conjugate gradient methods are among the most effective methods for solving large-scale unconstrained optimization problems. The parameter formula of the HS conjugate gradient method is modified to obtain a new formula, and an algorithmic framework is built on the new formula. It is proved that the search directions generated by the framework satisfy the sufficient descent condition independently of any line search condition, and global convergence of the algorithm is proved under the standard Wolfe line search conditions. Finally, numerical tests of the new algorithm show that the modified method is effective.

20.
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Unlike existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction always satisfies the sufficient descent condition and is close to the quasi-Newton direction. With these suitable choices, the additional assumption on the boundedness of the spectral parameter made in the method proposed by Andrei is removed. Under some mild conditions, global convergence is established. Numerical experiments are employed to demonstrate the efficiency of the algorithm for solving large-scale benchmark test problems, particularly in comparison with existing state-of-the-art algorithms available in the literature.
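A spectral conjugate gradient direction has the general form below, where θ_k > 0 is the spectral (scaling) parameter and β_k the conjugate parameter; the paper's specific choices of θ_k and β_k are its contribution and are not reproduced here:

```latex
d_0 = -g_0, \qquad d_k = -\theta_k\, g_k + \beta_k\, d_{k-1}, \quad k \ge 1.
```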
