Similar Articles
1.
A Generalized AS-GN Hybrid Conjugate Gradient Algorithm
闫晖  陈兰平 《运筹学学报》2010,14(3):122-128
This paper proposes a new algorithm for unconstrained optimization that contains the hybrid conjugate gradient method of Touati-Ahmed and Storey (AS) and that of Gilbert and Nocedal (GN) as special cases under exact line search. By constructing a new formula for $\beta_{k}$, the new algorithm satisfies the descent condition automatically, and this property is independent of both the line search and the convexity of the objective function. Under general conditions, we prove the global convergence of the new algorithm. Numerical results show that the algorithm is effective on the test functions.
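For reference, a minimal Python sketch of the two hybrid parameters that this algorithm generalizes, assuming the standard FR and PRP formulas; the function names are illustrative, not the paper's notation:

```python
import numpy as np

def beta_fr(g_new, g_old):
    # Fletcher-Reeves parameter: ||g_k||^2 / ||g_{k-1}||^2
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak parameter: g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_as(g_new, g_old):
    # Touati-Ahmed/Storey (AS) hybrid: use PRP while it stays in [0, FR],
    # otherwise fall back to FR
    fr, prp = beta_fr(g_new, g_old), beta_prp(g_new, g_old)
    return prp if 0.0 <= prp <= fr else fr

def beta_gn(g_new, g_old):
    # Gilbert/Nocedal (GN) hybrid: clamp PRP into the interval [-FR, FR]
    fr, prp = beta_fr(g_new, g_old), beta_prp(g_new, g_old)
    return max(-fr, min(prp, fr))
```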

2.
A modified conjugate gradient projection descent algorithm is constructed for nonlinear programming problems constrained to a closed convex set, and its global convergence is analyzed without assuming that the sequence of iterates is bounded. Combining the new algorithm with conjugate gradient parameters yields three classes of modified conjugate gradient projection algorithms. Numerical examples show that the algorithms are effective.
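As an illustration of the projection step such algorithms rely on, a minimal sketch using a box as the closed convex set (the projection operator and names are assumptions, not the paper's notation):

```python
import numpy as np

def project_box(x, lower, upper):
    # Euclidean projection onto the box {x : lower <= x <= upper},
    # a simple instance of a closed convex set
    return np.clip(x, lower, upper)

def projected_descent_step(x, direction, alpha, lower, upper):
    # One projected step: move along the (conjugate) descent direction,
    # then project back onto the feasible set
    return project_box(x + alpha * direction, lower, upper)
```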

3.
This paper proposes a new hybrid conjugate gradient algorithm for unconstrained optimization. By constructing a new formula for $\beta_{k}$ and, based on it, a method for determining the search direction that differs from the traditional one, the new algorithm satisfies the descent condition automatically, and this property is independent of both the line search and the convexity of the objective function. Under fairly weak conditions, we prove the global convergence of the new algorithm. Numerical results also demonstrate the effectiveness of the algorithm.

4.
By giving a particular choice for the parameter in the memory gradient algorithm for unconstrained programming, a memory gradient Goldstein-Levitin-Polyak projection descent direction for the objective function is obtained, and a memory gradient Goldstein-Levitin-Polyak projection algorithm is thereby constructed for nonlinear programming problems with convex constraints. Its global convergence is analyzed under exact one-dimensional line search and without assuming that the sequence of iterates is bounded, and some fairly deep convergence results are obtained. Memory gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient algorithms are also given, thereby extending the classical conjugate gradient algorithms to nonlinear programming problems with convex constraints. Numerical examples show that the new algorithms are more effective than the gradient projection algorithm.

5.
To address the limitations of the conjugate gradient method in constructing conjugate directions when solving unconstrained convex quadratic programs, the conjugate gradient method is improved. A new way of constructing conjugate directions is given and proved correct by mathematical induction. The basic computational procedure for applying the improved method is also presented, and its convergence is proved. A worked example shows that, for unconstrained convex quadratic programs, the method has certain advantages over the standard conjugate gradient method.
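For context, a sketch of the classical linear conjugate gradient iteration for min ½xᵀAx − bᵀx that such improvements start from; this is the standard textbook form, not the paper's modified construction:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    # Classical linear CG for min 0.5*x^T A x - b^T x with A symmetric
    # positive definite, i.e. for solving A x = b
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x          # residual = negative gradient
    d = r.copy()           # first direction: steepest descent
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact step along d
        x += alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # keeps directions A-conjugate
        d = r_new + beta * d
        r = r_new
    return x
```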

6.
A Conjugate Projection Gradient Method for Linearly Constrained Optimization
Combining the ideas of the conjugate gradient method and the gradient projection method, this paper develops a new algorithm for optimization with linear equality constraints, called the conjugate projection gradient method. The main properties and the convergence of the algorithm are analyzed and proved for convex quadratic objective functions and for general objective functions.
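A minimal sketch of the projection idea for linear equality constraints Ax = c, assuming A has full row rank; the paper's actual conjugate projection update is more elaborate:

```python
import numpy as np

def nullspace_projector(A):
    # Orthogonal projector P = I - A^T (A A^T)^{-1} A onto the null space
    # of A; directions of the form P @ d keep A x = c satisfied
    # (requires A to have full row rank so that A A^T is invertible)
    return np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)

# Usage sketch: project the steepest-descent direction before the
# conjugate gradient update, e.g. d = P @ (-g) + beta * d_prev
```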

7.
The hybrid conjugate gradient method is an improved conjugate gradient method with relatively good numerical performance. Building on the hybrid conjugate gradient method proposed by Jia, a new hybrid conjugate gradient algorithm with the sufficient descent property is established and proved to be globally convergent under a strong Wolfe line search. Numerical results show that the algorithm is effective.
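For reference, a check of the standard strong Wolfe conditions used in such convergence proofs; this is a generic sketch, and the constants c1 and c2 are conventional defaults rather than the paper's:

```python
def strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    # Strong Wolfe conditions for a candidate step alpha along d:
    # sufficient decrease plus a two-sided curvature bound
    g0 = grad(x) @ d
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * g0
    curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(g0)
    return armijo and curvature
```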

8.
Conjugate gradient methods are a widely used class of methods for large-scale unconstrained optimization. A new nonlinear conjugate gradient (CG) method is proposed; theoretical analysis shows that the new algorithm possesses the sufficient descent property under a variety of line search conditions. A global convergence theorem for the new CG algorithm is then proved. Finally, extensive numerical experiments show that, compared with several classical CG methods, the new algorithm is computationally more efficient.
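A one-line statement of the sufficient descent property claimed here, as a hedged sketch with a generic constant c:

```python
def sufficient_descent(g, d, c=1e-4):
    # Sufficient descent: g_k^T d_k <= -c * ||g_k||^2 for a fixed c > 0,
    # a standard requirement in CG convergence analyses
    return g @ d <= -c * (g @ g)
```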

9.
For unconstrained optimization, two classes of modified WYL conjugate gradient methods are proposed by modifying the conjugate gradient parameter to construct new search directions. At every iteration, the search directions generated by both algorithms satisfy the sufficient descent property. Global convergence of the algorithms is proved under suitable conditions. Numerical results show that the algorithms are feasible and effective.
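A sketch of the original Wei-Yao-Liu (WYL) parameter that the two algorithms modify, to the best of our reading of the literature; the paper's modifications are not reproduced here:

```python
import numpy as np

def beta_wyl(g_new, g_old):
    # Wei-Yao-Liu parameter: a PRP-like formula in which g_{k-1} is
    # rescaled by ||g_k|| / ||g_{k-1}|| before being subtracted
    scale = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return g_new @ (g_new - scale * g_old) / (g_old @ g_old)
```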

10.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao (D-L) conjugacy condition and, under a suitable condition, agrees with the Newton direction; furthermore, this property does not depend on any line search. We then prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and the global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
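A minimal sketch of the convex-combination parameter described here, assuming the standard LS and FR formulas; the rule for choosing t is the paper's contribution and is not reproduced:

```python
def beta_ls(g_new, g_old, d_old):
    # Liu-Storey parameter: g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1})
    return g_new @ (g_new - g_old) / (-(d_old @ g_old))

def beta_hybrid(g_new, g_old, d_old, t):
    # Convex combination of the LS and FR parameters for t in [0, 1];
    # t = 0 recovers LS and t = 1 recovers FR
    fr = (g_new @ g_new) / (g_old @ g_old)
    return (1.0 - t) * beta_ls(g_new, g_old, d_old) + t * fr
```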

11.
Jiang Xianzhen, Liao Wei, Yin Jianghua, Jian Jinbao 《Numerical Algorithms》2022,91(1):161-191

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for solving unconstrained optimization. The conjugate parameter in the search direction is a hybrid of the Dai-Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient direction and a convex combination of the last search direction and the gradient at the previous iteration. Without choosing any specific conjugate parameter, we show that the search directions generated by the family always possess the descent property independently of the line search technique, and that the family is globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the presented family, we further design a specific conjugate parameter and perform medium- and large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show the encouraging efficiency and applicability of the proposed methods, even compared with state-of-the-art methods.
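An illustrative Python reading of the family's ingredients, combining the named Dai-Yuan parameter with a generic convex-combination weight theta; only beta_dy is a standard formula, and the paper's exact coefficient definitions differ in detail:

```python
def beta_dy(g_new, g_old, d_old):
    # Dai-Yuan parameter: ||g_k||^2 / d_{k-1}^T (g_k - g_{k-1})
    return (g_new @ g_new) / (d_old @ (g_new - g_old))

def three_term_direction(g_new, g_old, d_old, beta, theta):
    # Illustrative three-term form: negative gradient plus a convex
    # combination (weight theta in [0, 1]) of the previous direction
    # and the previous gradient, scaled by the conjugate parameter
    return -g_new + beta * (theta * d_old + (1.0 - theta) * g_old)
```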

12.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm, SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as a spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results on a set of 500 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient algorithm SCG. The author was awarded the Romanian Academy Grant 168/2003.
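For reference, the spectral (Barzilai-Borwein-type) scaling mentioned above, as a minimal sketch; the anticipative function-value-based alternative is not reproduced:

```python
def spectral_scale(s, y):
    # Spectral scaling of the gradient: theta_k = s^T s / s^T y, where
    # s = x_k - x_{k-1} is the step and y = g_k - g_{k-1} the gradient
    # difference (well defined when s^T y > 0)
    return (s @ s) / (s @ y)
```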

13.
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.

14.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter β_k is computed by an approximation of the Hessian/vector product through finite differences. For search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the new conjugacy condition and related new conjugate gradient method by Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Statist. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
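A hedged sketch of the forward-difference Hessian/vector approximation at the core of the method; the interval heuristic below is a generic default, whereas the paper makes a more careful, adaptive choice:

```python
import numpy as np

def hessian_vector_fd(grad, x, v, eps=None):
    # Forward-difference approximation of the Hessian/vector product:
    # H(x) v ~ (grad(x + eps*v) - grad(x)) / eps
    if eps is None:
        # Generic choice of the finite difference interval
        eps = (np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x))
               / max(np.linalg.norm(v), 1e-12))
    return (grad(x + eps * v) - grad(x)) / eps
```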

15.
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as a linear combination of the gradient and the previous direction, with coefficients chosen so that both the descent and the conjugacy conditions are satisfied at every iteration. The algorithm introduces a modified Wolfe line search, in which the parameter in the second Wolfe condition is modified at every iteration. It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with a strong Wolfe line search generates directions bounded away from infinity. The algorithm uses an acceleration scheme that modifies the step length so as to improve the reduction of the function values along the iterations. Numerical comparisons with some conjugate gradient algorithms on a set of 75 unconstrained optimization problems of different dimensions show that the computational scheme outperforms the known conjugate gradient algorithms of Hestenes and Stiefel; Polak, Ribière and Polyak; Dai and Yuan and the hybrid Dai and Yuan; CG_DESCENT with Wolfe line search; as well as quasi-Newton L-BFGS.
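One plausible way to pin down two direction coefficients from a descent and a conjugacy condition simultaneously, as an illustrative sketch only; the paper's exact conditions and safeguards differ:

```python
import numpy as np

def direction_coefficients(g_new, d_old, y_old):
    # Solve for (theta, beta) in d_k = -theta * g_k + beta * d_{k-1}
    # so that a descent condition g_k^T d_k = -||g_k||^2 and a
    # conjugacy condition y_{k-1}^T d_k = 0 hold simultaneously
    A = np.array([[-(g_new @ g_new), g_new @ d_old],
                  [-(y_old @ g_new), y_old @ d_old]])
    rhs = np.array([-(g_new @ g_new), 0.0])
    theta, beta = np.linalg.solve(A, rhs)
    return theta, beta
```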

16.
连淑君  王长钰 《应用数学》2007,20(1):120-127
In this paper we discuss a family of conjugate gradient methods that can be viewed as convex combinations of the FR and DY methods. We propose two Armijo-type line searches and, under these line searches, establish the global convergence of the family.
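A generic Armijo backtracking sketch of the kind the two proposed line searches refine; the constants and safeguards here are illustrative defaults, not the paper's:

```python
def armijo_step(f, grad, x, d, alpha0=1.0, rho=0.5, c1=1e-4, max_backtracks=50):
    # Basic Armijo backtracking: shrink alpha until the sufficient
    # decrease condition f(x + alpha*d) <= f(x) + c1*alpha*g^T d holds
    alpha, fx, g0 = alpha0, f(x), grad(x) @ d
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + c1 * alpha * g0:
            return alpha
        alpha *= rho
    return alpha
```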

17.
A new model is constructed on the basis of the structure of the K-SVCR algorithm. Its distinctive feature is that its first-order optimality conditions can be reformulated as a linear complementarity problem, which, via an implicit Lagrangian function, can be further transformed into a strongly convex unconstrained optimization problem. The latter is solved with conjugate gradient techniques, and the separating hyperplane is obtained in finitely many steps. Preliminary experiments on standard data sets show that the proposed algorithm clearly improves both classification accuracy and speed.

18.
Satisfying the sufficient descent condition is a strength of a conjugate gradient method. Here, it is shown that, under the Wolfe line search conditions, the search directions generated by the memoryless BFGS conjugate gradient algorithm proposed by Shanno satisfy the sufficient descent condition for uniformly convex functions.
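For reference, a sketch of the memoryless BFGS direction d = -H g, where H is the BFGS update of the identity from the latest pair (s, y), the scheme underlying Shanno's algorithm; function and variable names are illustrative:

```python
def memoryless_bfgs_direction(g, s, y):
    # d = -H g with H = (I - s y^T/s^T y)(I - y s^T/s^T y) + s s^T/s^T y,
    # i.e. BFGS applied to the identity using only the latest (s, y);
    # well defined when the curvature condition s^T y > 0 holds
    sy = s @ y
    return (-g
            + ((g @ y) * s + (g @ s) * y) / sy
            - (1.0 + (y @ y) / sy) * ((g @ s) / sy) * s)
```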
