Similar articles
19 similar articles were retrieved.
1.
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method and proves that it is globally convergent under a generalized Wolfe line search. Numerical experiments show that the algorithm converges well and is effective.
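For reference, a generic nonlinear conjugate gradient iteration with a Wolfe-type line search can be written as follows; this is only a sketch of the standard framework these abstracts build on, and the specific parameter $\beta_k$ and line-search constants of each paper are not reproduced here:

$x_{k+1} = x_k + \alpha_k d_k$, with $d_0 = -g_0$ and $d_k = -g_k + \beta_k d_{k-1}$ for $k \ge 1$,

where $\alpha_k$ satisfies the standard Wolfe conditions

$f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k g_k^T d_k$ and $g(x_k + \alpha_k d_k)^T d_k \ge \sigma\, g_k^T d_k$, with $0 < \delta < \sigma < 1$.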

2.
For the parameter in the search direction of the super-memory gradient method for unconstrained programming, an assumption is given that determines a new admissible range of values and guarantees that the search direction is a sufficient descent direction of the objective function; a new class of memory gradient algorithms is obtained in this way. Global convergence of the algorithm under the Armijo step-size search is discussed without assuming that the sequence of iterates is bounded, and modified memory gradient methods combined with the FR, PR, and HS conjugate gradient formulas are given. Numerical experiments show that the new algorithm is more stable and more effective than the FR, PR, and HS conjugate gradient methods and the super-memory gradient method under the Armijo line search.
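The FR, PR, and HS parameters referred to throughout these abstracts are the classical conjugate gradient choices (the memory gradient direction itself and each paper's admissible parameter range are not reproduced here):

$\beta_k^{FR} = \|g_k\|^2 / \|g_{k-1}\|^2$, $\beta_k^{PR} = g_k^T(g_k - g_{k-1}) / \|g_{k-1}\|^2$, $\beta_k^{HS} = g_k^T(g_k - g_{k-1}) / \big(d_{k-1}^T(g_k - g_{k-1})\big)$.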

3.
孙清滢 《数学进展》2004,33(5):598-606
Using the Rosen projection matrix, a three-term memory gradient Rosen projection descent algorithm is constructed for optimization problems with linear or nonlinear inequality constraints, and its convergence is proved. Three-term memory gradient Rosen projection algorithms combined with the FR, PR, and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient method to constrained programming problems. Numerical examples show that the algorithm is effective.

4.
The hybrid conjugate gradient method is an improved conjugate gradient method with good numerical performance. Building on the hybrid conjugate gradient method proposed by Jia, a new hybrid conjugate gradient algorithm with the sufficient descent property is established, and its global convergence under the strong Wolfe line search is proved. Numerical results show that the algorithm is effective.

5.
A hybrid HS-DY conjugate gradient method
戴志锋  陈兰平 《计算数学》2005,27(4):429-436
Building on the HS and DY methods and combining the advantages of both, this paper proposes a new hybrid conjugate gradient method for unconstrained optimization. Global convergence of the algorithm under the Wolfe line search is proved without imposing a descent condition. Numerical experiments show that the new algorithm is more effective than the HS and PR methods.
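A well-known way to hybridize the HS and DY parameters, given here only as an illustration of the general idea and not necessarily the rule adopted in this paper, is

$\beta_k = \max\{0, \min\{\beta_k^{HS}, \beta_k^{DY}\}\}$, where $\beta_k^{DY} = \|g_k\|^2 / \big(d_{k-1}^T(g_k - g_{k-1})\big)$,

which aims to keep the strong convergence behaviour of DY while retaining much of the practical efficiency of HS.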

6.
Nonlinear conjugate gradient methods with exact line search
This paper proposes a nonlinear conjugate gradient method for unconstrained optimization and proves its global convergence under exact line search. When the objective function is uniformly convex, the algorithm is shown to have a linear convergence rate. Numerical experiments show that the algorithm is effective for solving practical problems.
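Under exact line search the step size minimizes the objective along the search direction, which makes the new gradient orthogonal to the previous direction; this standard relation is stated here only for reference:

$\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k)$, hence $g_{k+1}^T d_k = 0$.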

7.
Designing conjugate gradient methods by solving an optimization problem that carries a penalty parameter is a new line of thought. Based on Fatemi's optimization problem, a spectral three-term conjugate gradient method is constructed by estimating the step size and choosing a suitable penalty parameter, and the spectral parameter is modified in order to establish global convergence. Sufficient descent and global convergence of the spectral three-term conjugate gradient algorithm are proved under the standard Wolfe line search. Finally, tests of several algorithms on the same set of problems show that the new method performs well numerically.

8.
A special choice is given for the parameter in the memory gradient method for unconstrained programming, which yields a memory gradient Goldstein-Levitin-Polyak projection descent direction of the objective function. A memory gradient Goldstein-Levitin-Polyak projection algorithm is then constructed for nonlinear programming problems with convex constraints, and its global convergence is analyzed under exact one-dimensional step-size search without assuming boundedness of the iterate sequence, yielding some fairly deep convergence results. Memory gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient parameters are also given, extending the classical conjugate gradient method to convex-constrained nonlinear programming. Numerical examples show that the new algorithm is more effective than the gradient projection method.
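For a closed convex feasible set $\Omega$, the projection step underlying this kind of Goldstein-Levitin-Polyak algorithm has the generic form below (a sketch of the standard projected descent framework; the memory gradient direction $d_k$ and the parameter choice are the paper's own):

$x_{k+1} = P_\Omega(x_k + \alpha_k d_k)$, where $P_\Omega(y) = \arg\min_{x \in \Omega} \|x - y\|$ is the Euclidean projection onto $\Omega$.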

9.
The gradient projection method is an effective class of algorithms for constrained optimization and occupies an important place in the field. However, the projection it uses is an orthogonal projection, which contains no second-order derivative information of the objective and constraint functions, so its convergence rate is not very satisfactory. This paper introduces the concept of a conjugate projection and uses it to construct a conjugate projection variable metric algorithm for general linear or nonlinear constraints, proving global convergence of the algorithm under certain conditions. Because the conjugate projection properly incorporates second-order derivative information of the objective and constraint functions, the convergence rate can be expected to improve. Numerical results show that the algorithm is effective.

10.
A modified conjugate gradient projection descent algorithm is constructed for nonlinear programming problems constrained to a closed convex set, and its global convergence is analyzed without assuming boundedness of the iterate sequence. Combining the new algorithm with conjugate gradient parameters, three classes of modified conjugate gradient projection algorithms are given. Numerical examples show that the algorithm is effective.

11.
In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the FR method, a class of new restarting three-term conjugate gradient methods is presented. Global convergence of the new method under two kinds of common line searches is proved.

12.
Conjugate gradient optimization algorithms depend on their search directions, which vary with the choice of the parameter in the search direction. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions, and global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with a modified Curry-Altman step-size rule and a bounded level set. By combining the PR method with the new method, the PR method is modified so that it has the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.
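As an illustration of the kind of method described in this and related abstracts, the following minimal Python sketch runs a restarting nonlinear conjugate gradient iteration with the classical FR parameter and a simple Armijo backtracking line search; the parameter conditions and the modified Curry-Altman step-size rule analyzed in these papers are not reproduced, so the code is only a generic example.

import numpy as np

def fr_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic restarting FR conjugate gradient method (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                    # restart if d is not a descent direction
            d = -g
        # Armijo backtracking line search (an assumed, simplified step-size rule)
        alpha, rho, c = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves parameter
        d = -g_new + beta * d                # new conjugate gradient direction
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(fr_conjugate_gradient(lambda x: 0.5 * x.dot(A @ x) - b.dot(x),
                            lambda x: A @ x - b,
                            np.zeros(2)))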

13.
Using a generalized projection matrix, a condition is imposed on the parameters of the three-term memory gradient method for unconstrained programming, determining their admissible range so that a three-term memory gradient generalized projection descent direction of the objective function is obtained. A three-term memory gradient generalized projection algorithm is then established for optimization problems with nonlinear equality and inequality constraints, and its convergence is proved. Three-term memory gradient generalized projection algorithms combined with the FR, PR, and HS conjugate gradient parameters are also given, extending the classical conjugate gradient method to constrained programming problems. Numerical examples show that the algorithm is effective.

14.
Under very weak conditions, this paper obtains new global convergence results for the Polak-Ribière and Hestenes-Stiefel conjugate gradient methods for unconstrained optimization, in which the parameters $\beta_k^{PR}$ and $\beta_k^{HS}$ are allowed to take values in certain negative regions that depend on $k$. These new convergence results improve on those already in the literature. Numerical tests show that the new PR and HS methods of this paper are quite effective.

15.
孙清滢 《数学季刊》2003,18(2):154-162
Conjugate gradient optimization algorithms depend on their search directions, which vary with the choice of the parameters in the search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method under two kinds of common line searches is proved. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step-size rule and a bounded level set. Secondly, using a comparison technique, some general convergence properties of the new method under other kinds of step-size rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.

16.
In this paper, we propose a new trust region method for unconstrained optimization problems. The new trust region method automatically adjusts the trust region radius of the related subproblem at each iteration and has strong global convergence under mild conditions. We also analyze the global linear convergence and the local superlinear and quadratic convergence rates of the new method. Numerical results show that the new trust region method is effective and efficient in practical computation.
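The standard trust-region framework behind such methods solves, at each iterate $x_k$, the model subproblem (a generic sketch; the paper's automatic radius-adjustment rule is not reproduced here)

$\min_p\; m_k(p) = f(x_k) + g_k^T p + \tfrac{1}{2} p^T B_k p \quad \text{s.t. } \|p\| \le \Delta_k$,

and then accepts or rejects the step according to the ratio $\rho_k = \big(f(x_k) - f(x_k + p_k)\big) / \big(m_k(0) - m_k(p_k)\big)$, enlarging $\Delta_k$ when $\rho_k$ is close to 1 and shrinking it when $\rho_k$ is small.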

17.
We study piecewise decomposition methods for mathematical programs with equilibrium constraints (MPECs) for which all constraint functions are linear. At each iteration of a decomposition method, one step of a nonlinear programming scheme is applied to one piece of the MPEC to obtain the next iterate. Our goal is to understand global convergence to B-stationary points of these methods when the embedded nonlinear programming solver is a trust-region scheme, and the selection of pieces is determined using multipliers generated by solving the trust-region subproblem. To this end we study global convergence of a linear trust-region scheme for linearly-constrained NLPs that we call a trust-search method. The trust-search has two features that are critical to global convergence of decomposition methods for MPECs: a robustness property with respect to switching pieces, and a multiplier convergence result that appears to be quite new for trust-region methods. These combine to clarify and strengthen global convergence of decomposition methods without resorting either to additional conditions such as eventual inactivity of the trust-region constraint, or more complex methods that require a separate subproblem for multiplier estimation.

18.
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.

19.
Since 1965, there has been significant progress in the theoretical study of quasi-Newton methods for solving nonlinear equations, especially in local convergence analysis. However, studies of the global convergence of quasi-Newton methods are relatively scarce, especially for the BFGS method. To ensure global convergence, a merit function such as the squared-norm merit function is typically used. In this paper, we propose an algorithm for solving nonlinear monotone equations that combines the BFGS method with the hyperplane projection method. We prove that the proposed BFGS method converges globally if the equation is monotone and Lipschitz continuous, without any differentiability requirement on the equation, which makes it possible to solve some nonsmooth equations. An attractive property of the proposed method is that its global convergence is independent of any merit function. We also report numerical results to show the efficiency of the proposed method.
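The hyperplane projection step used in this kind of method for a monotone equation $F(x)=0$ typically has the following standard form (a sketch of the Solodov-Svaiter-type projection; the BFGS direction and the line search used in the paper are not reproduced here): given a trial point $z_k = x_k + \alpha_k d_k$ with $F(z_k)^T (x_k - z_k) > 0$, the hyperplane $H_k = \{x : F(z_k)^T (x - z_k) = 0\}$ separates $x_k$ from the solution set by monotonicity, and the next iterate is the projection of $x_k$ onto $H_k$:

$x_{k+1} = x_k - \dfrac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}\, F(z_k)$.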


