Similar Documents (19 results)
1.
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameter in the search directions. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions. Global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with our new method, the PR method is modified to have the global convergence property. Numerical experiments show that the new methods are efficient compared with the FR conjugate gradient method.
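As an illustration of a descent-safeguarded conjugate gradient iteration of the kind described above, here is a minimal Python sketch on a hypothetical quadratic test problem. The paper's exact parameter conditions and the Curry-Altman step-size rule are not reproduced; a plain Armijo backtracking search stands in for the line search, and the safeguard simply restarts along the steepest-descent direction.

```python
import numpy as np

def prp_conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=200):
    """Polak-Ribiere (PR) conjugate gradient with a descent safeguard:
    if the PR direction fails the descent test d^T g < 0, restart with
    the steepest-descent direction."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PR parameter: beta = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
        beta = g_new.dot(g_new - g) / g.dot(g)
        d_new = -g_new + beta * d
        # Descent safeguard: restart along -g if d is not a descent direction
        if d_new.dot(g_new) >= 0:
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x

# Hypothetical test problem: min (x1 - 1)^2 + 10 (x2 + 2)^2
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x_star = prp_conjugate_gradient(f, grad, np.array([5.0, 5.0]))
```

The safeguard is the essential ingredient: without it, the PR direction with an inexact line search can fail to be a descent direction.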

2.
Trust region methods are powerful and effective optimization methods. The conic model method is a newer approach that makes more information available at each iteration than standard quadratic-model methods. The advantages of these two approaches can be combined into a more powerful method for constrained optimization. The trust region subproblem of our method minimizes a conic function subject to the linearized constraints and the trust region bound. The new algorithm retains robust global properties, and its global convergence under standard conditions is established.

3.
Sun Qingying, Chinese Quarterly Journal of Mathematics (数学季刊), 2003, 18(2): 154-162
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameters in the search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. First, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with the Curry-Altman step-size rule and a bounded level set. Second, using a comparison technique, some general convergence properties of the new method with another kind of step-size rule are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.

4.
In this paper, a new class of memoryless non-quasi-Newton methods for solving unconstrained optimization problems is proposed, and the global convergence of this method with inexact line search is proved. Furthermore, we propose a hybrid method that mixes the memoryless non-quasi-Newton method with the memoryless Perry-Shanno quasi-Newton method. The global convergence of this hybrid memoryless method is proved under mild assumptions. Initial results show that these new methods are efficient on the given test problems. In particular, the memoryless non-quasi-Newton method requires little storage and computation, so it can efficiently solve large-scale optimization problems.
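The "memoryless" idea above can be made concrete with the classical memoryless BFGS direction (a standard reference point for the Perry-Shanno family; the paper's own non-quasi-Newton update is not reproduced here). Only the last step s and gradient change y are stored, never an n-by-n matrix:

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Search direction d = -H g, where H is the BFGS inverse-Hessian
    update applied to the identity matrix. Only the previous step
    s = x_k - x_{k-1} and gradient change y = g_k - g_{k-1} are used."""
    rho = 1.0 / y.dot(s)
    Hg = (g - rho * (s.dot(g) * y + y.dot(g) * s)
          + rho * (1.0 + rho * y.dot(y)) * s.dot(g) * s)
    return -Hg

# The update satisfies the secant equation H y = s, so the direction
# evaluated at g = y must be exactly -s:
s = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
d = memoryless_bfgs_direction(y, s, y)
```

This is why such methods suit large-scale problems: the cost per iteration is a handful of inner products, O(n) storage and work.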

5.
In this paper, a new weak condition for the convergence of the secant method for solving systems of nonlinear equations is proposed. A convergence ball with center x0 is replaced by one with center x1, the first approximation generated by the secant method from the initial data x-1 and x0. Under boundedness conditions on the divided differences, a convergence theorem is obtained, and two examples are provided to illustrate the weakness of the convergence conditions. Moreover, the secant method is applied to a system of nonlinear equations to demonstrate the viability and effectiveness of the results in the paper.
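A minimal sketch of a secant-type method for systems: Broyden's rank-one update (a standard secant scheme satisfying the secant equation, used here in place of the paper's divided-difference formulation), initialized by forward divided differences at x0, on a hypothetical 2-by-2 test system:

```python
import numpy as np

def broyden_secant(F, x0, tol=1e-10, max_iter=100):
    """Broyden's method, a secant-type scheme for F(x) = 0: the Jacobian is
    replaced by an approximation B satisfying the secant equation
    B_{k+1} (x_{k+1} - x_k) = F(x_{k+1}) - F(x_k) at every step."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    Fx = F(x)
    # Initialize B by forward divided differences at x0
    h = 1e-7
    B = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        B[:, j] = (F(x + e) - Fx) / h
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)
        x_new = x + s
        F_new = F(x_new)
        # Rank-one secant update: afterwards B s = F_new - Fx holds
        B += np.outer((F_new - Fx) - B @ s, s) / s.dot(s)
        x, Fx = x_new, F_new
    return x

# Hypothetical test system: x1^2 = 4, x1 + x2 = 3  (root (2, 1))
F = lambda x: np.array([x[0] ** 2 - 4.0, x[0] + x[1] - 3.0])
x_root = broyden_secant(F, np.array([2.5, 0.5]))
```

Like the scalar secant method, no derivatives of F are evaluated after the initialization, and convergence is locally superlinear.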

6.
In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the FR method, a class of new restarting three-term conjugate gradient methods is presented. Global convergence properties of the new method with two kinds of common line searches are proved.

7.
A NEW TRUST REGION DOGLEG METHOD FOR UNCONSTRAINED OPTIMIZATION
Abstract. This paper presents a new trust region dogleg method for unconstrained optimization. The method can deal with the case when the Hessian B of the quadratic model is indefinite. It is proved that the method is globally convergent and, under certain conditions, has a quadratic convergence rate; the solution obtained by the method is then even a second-order stationary point.
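The dogleg step itself is easy to sketch. The version below assumes a positive definite model Hessian B (the indefinite case handled by the paper requires extra safeguards and is not reproduced): it takes the full Newton step if it fits in the trust region, otherwise bends from the Cauchy segment toward the Newton step and truncates at the boundary.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Dogleg solution of  min g^T p + 0.5 p^T B p  s.t. ||p|| <= delta,
    for positive definite B."""
    pB = np.linalg.solve(B, -g)            # full (quasi-)Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g.dot(g) / g.dot(B @ g)) * g    # unconstrained Cauchy point
    if np.linalg.norm(pU) >= delta:
        return -(delta / np.linalg.norm(g)) * g
    # Find tau in [0, 1] with ||pU + tau (pB - pU)|| = delta
    d = pB - pU
    a, b, c = d.dot(d), 2 * pU.dot(d), pU.dot(pU) - delta ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + tau * d

# With B = I the Newton step is -g; a tight radius just scales it back.
p_full = dogleg_step(np.array([1.0, 0.0]), np.eye(2), 10.0)
p_clip = dogleg_step(np.array([1.0, 0.0]), np.eye(2), 0.5)
```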

8.
We propose a nonmonotone adaptive trust region method based on a simple conic model for unconstrained optimization. Unlike traditional trust region methods, the subproblem in our method uses a simple conic model in which the Hessian of the objective function is approximated by a scalar matrix. The trust region radius is adjusted with a new self-adaptive strategy that uses information from both the previous and the current iteration. The new method needs less memory and computational effort. Global convergence and Q-superlinear convergence of the algorithm are established under mild conditions. Numerical results on a series of standard test problems show that the new method is effective and attractive for large-scale unconstrained optimization problems.

9.
In this paper, the continuously differentiable optimization problem min{f(x) : x ∈ Ω}, where Ω ⊆ R^n is a nonempty closed convex set, is considered. The gradient projection method of Calamai and Moré (Math. Programming, Vol. 39, pp. 93-116, 1987) is modified with a memory gradient to improve its convergence rate. The convergence of the new method is analyzed without assuming that the iteration sequence {x^k} is bounded. Moreover, it is shown that when f(x) is a pseudo-convex (quasi-convex) function, the new method has strong convergence results. Numerical results show that the method in this paper is more effective than the gradient projection method.
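The baseline being improved here is the classical gradient projection iteration x_{k+1} = P_Ω(x_k - α ∇f(x_k)). A minimal sketch (constant step size, hypothetical box-constrained example; the paper's memory-gradient acceleration is not reproduced):

```python
import numpy as np

def projected_gradient(grad, project, x0, alpha=0.1, tol=1e-8, max_iter=5000):
    """Classical gradient projection: step along -grad, then project back
    onto the closed convex set Omega. Stops when the projected step is small."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x_new = project(x - alpha * grad(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: minimize ||x - c||^2 over the box [0, 1]^2.
# The solution is simply c clipped to the box.
c = np.array([2.0, -0.5])
grad = lambda x: 2 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box
x_star = projected_gradient(grad, project, np.array([0.5, 0.5]))
```

The scheme stays feasible at every iterate, which is why it serves as the standard baseline for constrained first-order methods.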

10.
Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, extending the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive.

11.
This paper presents a new numerical method for solving two-stage stochastic programs with recourse. Owing to a newly introduced approximation technique, the method is globally convergent and locally superlinearly convergent.

12.
In this paper, we propose a new trust region method for unconstrained optimization problems. The new method automatically adjusts the trust region radius of the subproblem at each iteration and has strong global convergence under mild conditions. We also analyze the global linear convergence and the local superlinear and quadratic convergence rates of the new method. Numerical results show that the new trust region method is practical and efficient.

13.
The object of this paper is to construct a new efficient iterative method for solving nonlinear equations. The method is mainly based on the paper of Javidi [1], using a new scheme of a modified homotopy perturbation method. The new method has fifth-order convergence, and it is compared with second-, third-, fifth-, and sixth-order methods. Some numerical test problems are given to show the accuracy and fast convergence of the proposed method.
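Convergence-order claims like the one above can be checked numerically: for errors e_k, the order p satisfies p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1}) asymptotically. A sketch of that check, with Newton's method (order 2) as the subject since the paper's fifth-order scheme is not reproduced here:

```python
import math

def estimate_order(errors):
    """Estimate the convergence order p from three successive errors via
    p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Subject iteration: Newton's method on f(x) = x^2 - 2, root sqrt(2).
f = lambda x: x * x - 2
df = lambda x: 2 * x
root = math.sqrt(2)
x, errors = 3.0, []
for _ in range(5):           # 5 steps keep the errors above machine epsilon
    x = x - f(x) / df(x)
    errors.append(abs(x - root))
p = estimate_order(errors)
```

For the fifth-order method of the paper the same check should report p ≈ 5, provided the errors have not yet hit floating-point precision.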

14.
A new trust region method with adaptive radius
In this paper we develop a new trust region method with adaptive radius for unconstrained optimization problems. The new method adjusts the trust region radius automatically at each iteration and can reduce the number of subproblems solved. We investigate the global convergence and convergence rate of the new method under mild conditions. Theoretical analysis and numerical results show that the new adaptive trust region radius is effective and reasonable, and that the resulting trust region method is efficient for practical optimization problems. The work was supported in part by NSF grant CNS-0521142, USA.
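For reference, the textbook radius-update rule that adaptive schemes like the one above refine: the radius reacts to the agreement ratio rho between actual and model-predicted reduction. (This is the classical rule only; the paper's adaptive formula additionally exploits gradient and Hessian information.)

```python
def update_radius(rho, delta, step_norm, delta_max=10.0):
    """Classical trust region radius update driven by
    rho = (actual reduction) / (predicted reduction)."""
    if rho < 0.25:
        return 0.25 * step_norm                  # poor model fit: shrink
    if rho > 0.75 and abs(step_norm - delta) < 1e-12:
        return min(2.0 * delta, delta_max)       # good fit at boundary: expand
    return delta                                 # otherwise keep the radius

# Shrink on a bad ratio, expand on a good full-length step, else keep.
d_bad  = update_radius(0.1, 1.0, 1.0)
d_good = update_radius(0.9, 1.0, 1.0)
d_keep = update_radius(0.5, 1.0, 0.5)
```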

15.
Zhang Juanmei, College Mathematics (大学数学), 2007, 23(6): 135-139
Iterative methods are important tools for finding approximate roots of nonlinear equations. Based on the implicit function theorem, this paper proposes a new way to prove the convergence and convergence order of iterative methods, and applies it to prove the convergence and convergence order of the Newton and Cauchy iterations. Finally, using the proposed proof technique, it is shown that the iterative scheme built from the cubic Taylor expansion is convergent with order at least 4, and it is conjectured that the scheme built from the n-th order Taylor expansion is convergent with order at least n+1.

16.
A new conjugate projection gradient algorithm and its superlinear convergence
Using the conjugate projection gradient technique combined with ideas from SQP algorithms, a new algorithm with explicit search directions is constructed. Under suitable conditions, the algorithm is proved to be globally and strongly convergent, with superlinear convergence. Finally, numerical experiments show that the algorithm is effective.

17.
Using the Armijo condition and the trust region method, a new merit function is constructed. Combining the interior point algorithm with the filter technique for the first time, a new algorithm for solving nonlinear complementarity problems, a filter interior point algorithm, is proposed. The main algorithm uses an Armijo-type line search to determine the step size, while the restoration algorithm uses a trust region method with appropriate control to guarantee convergence. The global convergence of the algorithm is also discussed. Finally, numerical experiments show that the method is effective.

18.
We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a standard, global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we prove global and superlinear local convergence. A special feature of the proposed method is that it does not require second-order correction steps. We present preliminary numerical results comparing our implementation with a classical filter SQP method.
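The core filter mechanism behind both the g-filter and l-filter above can be sketched with a single filter: a trial point, viewed as a pair (constraint violation h, objective f), is acceptable if no stored pair dominates it, up to a small envelope forcing sufficient decrease. (This is the generic monotone filter only; the paper's nonmonotone two-filter switching is not reproduced.)

```python
def filter_acceptable(h_new, f_new, filter_pairs, gamma=1e-5):
    """A trial pair (h_new, f_new) is acceptable to the filter if, for every
    stored pair, it improves either h or f by a gamma-sized margin."""
    return all(h_new <= (1 - gamma) * h or f_new <= f - gamma * h
               for (h, f) in filter_pairs)

def add_to_filter(h_new, f_new, filter_pairs):
    """Insert (h_new, f_new) and drop any stored pairs it dominates."""
    kept = [(h, f) for (h, f) in filter_pairs
            if not (h_new <= h and f_new <= f)]
    kept.append((h_new, f_new))
    return kept

flt = [(1.0, 5.0)]                      # one stored pair: h = 1, f = 5
ok   = filter_acceptable(0.5, 10.0, flt)  # less infeasible: acceptable
same = filter_acceptable(1.0, 5.0, flt)   # no margin of improvement: rejected
flt2 = add_to_filter(0.5, 4.0, flt)       # dominates (1, 5), which is dropped
```

A filter thus replaces a penalty function: no weighting parameter between f and h has to be chosen.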

19.
The augmented Lagrangian method is an effective method for solving nonlinear programs. This paper proves, from a new angle, the convergence of the augmented Lagrangian method for nonlinear nonsmooth convex optimization problems with inequality constraints: the convergence theorem for the constant-step gradient method is used to prove the convergence of the constant-step gradient method applied to the dual problem based on the augmented Lagrangian function, which yields the global convergence of the multiplier iteration of the augmented Lagrangian method.
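A minimal sketch of the multiplier iteration discussed above, on a hypothetical one-dimensional convex problem: each outer step approximately minimizes the augmented Lagrangian in x (plain constant-step gradient descent stands in for the paper's gradient scheme), then applies the multiplier update lam <- max(0, lam + r * g(x)) for the inequality constraint g(x) <= 0.

```python
import numpy as np

def augmented_lagrangian(grad_f, g, grad_g, x0, r=10.0, outers=30):
    """Augmented Lagrangian (multiplier) method for min f s.t. g(x) <= 0,
    with inner constant-step gradient descent on the augmented Lagrangian."""
    x = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(outers):
        for _ in range(2000):                    # inner minimization in x
            m = max(0.0, lam + r * g(x))         # active multiplier estimate
            dL = grad_f(x) + m * grad_g(x)       # gradient of L_A(., lam)
            if np.linalg.norm(dL) < 1e-10:
                break
            x = x - 0.01 * dL
        lam = max(0.0, lam + r * g(x))           # multiplier update
    return x, lam

# Hypothetical convex example: min x^2  s.t.  1 - x <= 0.
# KKT gives x* = 1 and multiplier lam* = 2.
gf = lambda x: np.array([2 * x[0]])      # grad of f(x) = x^2
g_ = lambda x: 1.0 - x[0]                # constraint g(x) = 1 - x
gg = lambda x: np.array([-1.0])          # grad of g
x_opt, lam_opt = augmented_lagrangian(gf, g_, gg, np.array([0.0]))
```

The multiplier update is exactly a gradient ascent step on the dual function, which is the viewpoint the paper's convergence proof exploits.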

