Similar Documents
20 similar documents found.
1.
A subspace projected conjugate gradient method is proposed for solving large bound-constrained quadratic programming problems. The conjugate gradient method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At every iteration, the search direction consists of two parts: one is a subspace truncated Newton direction, the other is a modified gradient direction. With the projected search, the algorithm is suitable for large problems. The convergence of the method is proved, and some numerical tests with dimensions ranging from 5000 to 20000 are given.
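To make the active-set/free-set splitting described above concrete, here is a minimal sketch of one outer iteration for the bound-constrained quadratic program min ½xᵀQx + cᵀx, l ≤ x ≤ u: a projected-gradient step on the active variables and a truncated CG step on the free subspace. This is an illustrative simplification under stated assumptions, not the authors' exact algorithm; the tolerance, iteration limit, and helper structure are assumptions.

```python
import numpy as np

def projected_subspace_cg_step(Q, c, x, l, u, tol=1e-8, cg_iters=50):
    """One illustrative outer iteration for min 0.5 x'Qx + c'x, l <= x <= u:
    projected-gradient step on active variables, truncated CG on the free subspace."""
    g = Q @ x + c                                    # gradient of the quadratic
    # Active set: variables at a bound with the gradient pushing outward
    active = ((x <= l + tol) & (g > 0)) | ((x >= u - tol) & (g < 0))
    free = ~active

    # Projected-gradient update on the active variables
    x_new = x.copy()
    x_new[active] = np.clip(x[active] - g[active], l[active], u[active])

    # Truncated CG on the free subspace: approximately solve Q_FF d = -g_F
    g_mid = Q @ x_new + c                            # gradient after the projected step
    Qff = Q[np.ix_(free, free)]
    r = -g_mid[free]
    d = np.zeros(r.size)
    p = r.copy()
    for _ in range(cg_iters):
        if np.linalg.norm(r) < tol:
            break
        Qp = Qff @ p
        alpha = (r @ r) / (p @ Qp)
        d += alpha * p
        r_new = r - alpha * Qp
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    # Keep the free variables feasible (projection back onto the box)
    x_new[free] = np.clip(x[free] + d, l[free], u[free])
    return x_new
```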

2.
In this paper, we present a conjugate gradient method for solving unconstrained optimization problems. Motivated by the Perry conjugate gradient method and the Dai-Liao method, an improved Perry update matrix is proposed to overcome the lack of symmetry and positive definiteness of the Perry matrix. The parameter in the update matrix is determined by minimizing the condition number of the iteration matrix, which ensures positive definiteness. The resulting method can also be viewed as a modified form of the CG-DESCENT method with an adjustment term. Under some mild conditions, the presented method is globally convergent. Numerical experiments in the CUTEst environment show that the proposed algorithm is promising.
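For background, the Dai-Liao family referred to above generates search directions of the following standard form (with $g_k=\nabla f(x_k)$, $s_{k-1}=x_k-x_{k-1}$, $y_{k-1}=g_k-g_{k-1}$, and a parameter $t>0$); the paper's specific contribution is a particular parameter choice obtained by minimizing a condition number:

```latex
d_k = -g_k + \beta_k^{DL} d_{k-1}, \qquad
\beta_k^{DL} = \frac{g_k^{\top}\left(y_{k-1} - t\, s_{k-1}\right)}{d_{k-1}^{\top} y_{k-1}} .
```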

3.
In this work, we present a new hybrid conjugate gradient method based on a convex hybridization of the DY and HS+ conjugate gradient update parameters, adapting a quasi-Newton philosophy. The hybridization parameter is computed by minimizing the distance between the hybrid conjugate gradient direction and the self-scaling memoryless BFGS direction. Furthermore, a significant property of our proposed method is that it ensures sufficient descent independent of the accuracy of the line search. The global convergence of the proposed method is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection indicate that the proposed method is preferable and in general superior to classical conjugate gradient methods in terms of efficiency and robustness.
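A minimal sketch of the convex hybridization idea follows. It is illustrative only: the hybridization parameter theta is passed in directly here, whereas the paper determines it by minimizing the distance to the self-scaling memoryless BFGS direction.

```python
import numpy as np

def hybrid_dy_hs_direction(g_new, g_old, d_old, theta):
    """Convex hybridization of the HS+ and DY update parameters:
    beta = (1 - theta) * beta_HS+ + theta * beta_DY, with theta in [0, 1]."""
    y = g_new - g_old                        # gradient difference
    denom = d_old @ y                        # common denominator of HS and DY
    beta_hs = max((g_new @ y) / denom, 0.0)  # HS+ (nonnegative restriction)
    beta_dy = (g_new @ g_new) / denom        # Dai-Yuan parameter
    beta = (1.0 - theta) * beta_hs + theta * beta_dy
    return -g_new + beta * d_old             # hybrid conjugate gradient direction
```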

4.
Minimizing the distance, in the Frobenius norm, between the search direction matrix of the Dai-Liao method and the scaled memoryless BFGS update, and using Powell's nonnegative restriction of the conjugate gradient parameters, a one-parameter class of nonlinear conjugate gradient methods is proposed. A brief global convergence analysis is then given, with and without a convexity assumption on the objective function. Preliminary numerical results are reported; they demonstrate that a proper choice of the parameter of the proposed class of conjugate gradient methods may lead to promising numerical performance.

5.
The main purpose of this paper is to provide a restarting direction for improving the standard conjugate gradient method. If drastically non-quadratic behaviour of the objective function is observed in the neighbourhood of x_k, then a restart should be performed. The scaled symmetric rank-one update with Davidon's optimal criterion is applied to generate the restarting direction. It is proved that the conjugate gradient method with this strategy retains quadratic termination. Numerical experiments show that it is successful.

6.
The paper compares a factorized sparse quasi-Newton update of Goldfarb with an unfactorized BFGS sparse update of Shanno on a series of test problems, with numerical results strongly favoring the unfactorized update. An analysis of Goldfarb's method is carried out to explain the poor numerical performance. Two specific conjugate gradient methods for solving the required systems of linear equations with the unfactorized update are described and tested. This research was supported by the National Science Foundation under Grant No. MCS-77-07327.

7.
Efficient hybrid conjugate gradient techniques
Descent property and global convergence proofs are given for a new hybrid conjugate gradient algorithm. Computational results for this algorithm are also given and compared with those of the Fletcher-Reeves method and the Polak-Ribière method, showing a considerable improvement over the latter two methods. We also give new criteria for restarting conjugate gradient algorithms that prove to be computationally very efficient. These criteria provide a descent property and global convergence for any conjugate gradient algorithm using a nonnegative update.
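For orientation, one widely used nonnegative hybrid rule of this general type combines the Polak-Ribière and Fletcher-Reeves parameters as sketched below; this is a generic illustration, not necessarily the exact criterion proposed in the paper.

```python
import numpy as np

def hybrid_beta(g_new, g_old):
    """Nonnegative hybrid of the Polak-Ribiere and Fletcher-Reeves parameters:
    beta = max(0, min(beta_PR, beta_FR))."""
    gg_old = g_old @ g_old
    beta_fr = (g_new @ g_new) / gg_old            # Fletcher-Reeves
    beta_pr = (g_new @ (g_new - g_old)) / gg_old  # Polak-Ribiere
    return max(0.0, min(beta_pr, beta_fr))
```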

8.
In this paper, we propose an active set identification technique which accurately identifies the active constraints in a neighborhood of an isolated stationary point without strict complementarity conditions. Based on this identification technique, we propose a conjugate gradient algorithm for large-scale bound-constrained optimization. In the algorithm, the recently developed modified Polak-Ribière-Polyak method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. Under appropriate conditions, we show that the proposed method is globally convergent. Numerical experiments are presented using bound-constrained problems from the CUTEr test problem library.

9.
A Brief Survey of Gradient Methods
孙聪  张亚 《运筹学学报》2021,25(3):119-132
Gradient methods are a class of first-order methods for solving optimization problems. Because of their simple form and low per-iteration cost, they are widely used for large-scale problems. This paper systematically reviews the iterative schemes and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter of a gradient method is the step size, whose choice directly determines the convergence properties and convergence rate. The construction of gradient step sizes and the corresponding convergence results are surveyed from four perspectives: line search frameworks, approximation techniques, stochastic techniques, and alternating/cyclic step sizes. Extensions such as gradient methods for nonsmooth and constrained problems, acceleration techniques, and stochastic gradient methods are also briefly reviewed.
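As a concrete instance of the basic iteration surveyed here, a gradient method with a backtracking (Armijo) line search can be sketched as follows; the constants and stopping rule are illustrative.

```python
import numpy as np

def gradient_descent(f, grad, x0, max_iter=1000, tol=1e-6,
                     alpha0=1.0, rho=0.5, c=1e-4):
    """Gradient method with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Backtrack until the Armijo sufficient-decrease condition holds
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= rho
        x = x - alpha * g
    return x

# Example: minimize f(x) = ||x||^2 starting from (3, -4)
x_star = gradient_descent(lambda x: x @ x, lambda x: 2 * x, [3.0, -4.0])
```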

10.
1. Introduction. Consider the following linearly constrained nonlinear programming problem: min f(x) subject to Ax = b, (1.1) where x ∈ R^n, A ∈ R^{m×n} and f ∈ C^2. We are interested in the case when n and m are large and when the Hessian matrix of f is difficult to compute or is dense. It is assumed that A is a matrix of full row rank and that the level set S(x0) = {x : f(x) ≤ f(x0), Ax = b} is nonempty and compact. In the past few years, there were two kinds of methods for solving the large-scale problem (1.1). For the one kind, pr…

11.
The conditions under which Huang's conjugate gradient method generates descent directions are given and discussed. Bounds for the condition number of the inverse Hessian matrix are estimated for the case of a symmetric update. The author is much indebted to Professor C. G. Broyden of the Essex University Computing Center for valuable advice and criticism; he is also grateful to Drs. J. Greenstadt and A. R. Gourlay for having sent copies of their unpublished papers.

12.
王丽平  陈晓红 《计算数学》2009,31(2):127-136
The left conjugate gradient method is a relatively new Krylov subspace method for solving large sparse linear systems. To overcome its numerical instability and the possibility of breakdown during the iteration, this paper reformulates the original method equivalently, derives an alternative recurrence for the left conjugate gradient directions, and presents a quasi-minimization left conjugate gradient algorithm. Numerical results confirm the close relationship between the modified algorithm and the original one.

13.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao conjugacy condition and, at the same time, coincides with the Newton direction under a suitable condition. Furthermore, this property does not depend on any line search. Next, we also prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
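For reference, the Dai-Liao conjugacy condition mentioned above reads (standard notation, with $s_k=x_{k+1}-x_k$, $y_k=g_{k+1}-g_k$, and a parameter $t\ge 0$):

```latex
d_{k+1}^{\top} y_k = -t\, g_{k+1}^{\top} s_k .
```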

14.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is that it uses the available function values in addition to the gradient values. It is shown that, for uniformly convex objective functions, the search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons of implementations of the method and an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems from the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.

15.
To address the limitations of the conjugate gradient method in constructing conjugate directions when solving unconstrained convex quadratic programs, an improvement of the conjugate gradient method is proposed. A new way of constructing conjugate directions is given and is justified by mathematical induction. The basic computational procedure for applying the improved conjugate gradient method is also presented, and the convergence of the method is proved. A worked example shows that, for unconstrained convex quadratic programs, the method has certain advantages over the standard conjugate gradient method.
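For comparison, the classical conjugate gradient iteration for an unconstrained convex quadratic, which the paper takes as its baseline, is sketched below.

```python
import numpy as np

def conjugate_gradient(Q, b, x0, tol=1e-10, max_iter=None):
    """Classical conjugate gradient method for min 0.5 x'Qx - b'x
    (equivalently, solving Qx = b with Q symmetric positive definite)."""
    x = np.asarray(x0, dtype=float)
    r = b - Q @ x                 # residual = negative gradient
    p = r.copy()
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Qp = Q @ p
        alpha = (r @ r) / (p @ Qp)     # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Qp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new Q-conjugate direction
        r = r_new
    return x
```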

16.
1 Introduction. Consider the unconstrained optimization problem min f(x). (1.1)

17.
By means of a conjugate gradient strategy, we propose a trust region method for unconstrained optimization problems. The search direction is an adequate combination of the conjugate gradient direction and the trust-region direction. The global convergence and the quadratic convergence of this method are established under suitable conditions. Numerical results show that the presented method is competitive with the trust region method and the conjugate gradient method.

18.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of the proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they use both gradient and function values. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We provide a comparison of implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.

19.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for solving unconstrained optimization problems. The new algorithm combines the Hestenes-Stiefel method with the Dai-Yuan method, and its convergence under the Wolfe line search is proved without imposing a descent condition. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.

20.
In this paper, we introduce a cautious BFGS (CBFGS) update criterion in the reduced Hessian sequential quadratic programming (SQP) method. An attractive property of this update criterion is that the matrices generated by the update are always positive definite. Under mild conditions, we obtain the global convergence of the reduced Hessian SQP method. In particular, the second order sufficient condition is not necessary for the global convergence of the method. Furthermore, we show that if the second order sufficient condition holds at an accumulation point, then the reduced Hessian SQP method with the CBFGS update reduces to the reduced Hessian SQP method with the ordinary BFGS update. Consequently, the local behavior of the proposed method is the same as that of the reduced Hessian SQP method with the BFGS update. The presented preliminary numerical experiments show the good performance of the method. This work was supported by the National Natural Science Foundation of China under grants 10671060 and 10471060.
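A minimal sketch of a cautious BFGS update in this spirit is shown below; the skip criterion follows the common Li-Fukushima form y_k^T s_k >= eps * ||g_k||^alpha * ||s_k||^2, and the constants are illustrative rather than the exact criterion used in the paper.

```python
import numpy as np

def cautious_bfgs_update(B, s, y, g_norm, eps=1e-6, alpha=1.0):
    """Update the Hessian approximation B with the BFGS formula only when the
    cautious criterion y's >= eps * ||g||^alpha * ||s||^2 holds; otherwise skip.
    Skipping keeps B symmetric positive definite."""
    ys = y @ s
    if ys < eps * (g_norm ** alpha) * (s @ s):
        return B                               # cautious criterion fails: skip update
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)    # standard BFGS update
              + np.outer(y, y) / ys)
```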
