Similar Articles
 10 similar articles found (search time: 78 ms)
1.
This paper presents a new supermemory gradient method for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods with line search and subspace techniques. The main characteristic of this method is that, at each iteration, a lower-dimensional system of linear equations is solved only once to obtain a trial step, thus avoiding the solution of a quadratic trust region subproblem. Another is that, when a trial step is not accepted, the method generates an iterate whose step length satisfies the Armijo line search rule, thus avoiding re-solving the linear system of equations. Under some reasonable assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computation.
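The Armijo rule referenced in this abstract can be illustrated with a generic backtracking sketch (the shrink factor `beta`, slope parameter `sigma`, and the quadratic test function are illustrative choices, not taken from the paper):

```python
import numpy as np

def armijo_step(f, x, d, grad, beta=0.5, sigma=1e-4, max_iter=50):
    """Backtracking Armijo line search: shrink t until
    f(x + t*d) <= f(x) + sigma * t * grad(x)^T d holds."""
    t = 1.0
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for a descent direction
    for _ in range(max_iter):
        if f(x + t * d) <= fx + sigma * t * slope:
            return t
        t *= beta                # reject the trial step length and shrink it
    return t

# Toy usage: f(x) = ||x||^2 with the steepest descent direction
f = lambda x: float(x @ x)
g = lambda x: 2 * x
x0 = np.array([1.0, 1.0])
t = armijo_step(f, x0, -g(x0), g)   # accepted step length
```

The accepted step is guaranteed to produce sufficient decrease without ever re-solving for a new direction, which is the cost-saving property the abstract highlights.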

2.
This paper presents a hybrid trust region algorithm for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods with line search and trust region techniques. A feature of the proposed method is that, at each iteration, a system of linear equations is solved only once to obtain a trial step. Further, when the trial step is not accepted, the method performs an inexact line search along it instead of solving a new linear system. Under reasonable assumptions, the algorithm is proven to be globally and superlinearly convergent. Numerical results are also reported that show the efficiency of the proposed method.

3.
In this article, an ODE-based trust region filter algorithm for unconstrained optimization is proposed. It can be regarded as a combination of trust region and filter techniques with ODE-based methods. Unlike existing trust-region-filter methods and ODE-based methods, a distinct feature of this method is that, at each iteration, a reduced linear system is solved to obtain a trial step, thus avoiding the solution of a trust region subproblem. Under some standard assumptions, the algorithm is proven to be globally convergent. Preliminary numerical results show that the new algorithm is efficient for large-scale problems.

4.
This paper presents a hybrid ODE-based method for unconstrained optimization problems, which combines the idea of IMPBOT with a subspace technique and a fixed step length. The main characteristic of this method is that, at each iteration, a lower-dimensional system of linear equations is solved only once to obtain a trial step. Another is that, when a trial step is not accepted, the method minimizes a convex overestimate instead, thus avoiding a line search to compute a step length. Under some reasonable assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computations, especially for small-scale unconstrained optimization problems.

5.
Hybridizing monotone and nonmonotone approaches, we employ a modified trust region ratio that provides more information about the agreement between the exact and approximate models. We also use an adaptive trust region radius, together with two accelerated Armijo-type line search strategies, to avoid re-solving the trust region subproblem whenever a trial step is rejected. The proposed algorithm is shown to be globally and locally superlinearly convergent. Comparative numerical experiments demonstrate the practical efficiency of the proposed accelerated adaptive trust region algorithm.
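The standard trust region acceptance ratio that such modified ratios build on can be sketched as follows (the thresholds `eta1`, `eta2` and the update factors are common textbook choices, not the paper's specific rule):

```python
import numpy as np

def tr_ratio_update(f, x, s, g, B, delta, eta1=0.25, eta2=0.75):
    """Classical trust region test: rho compares the actual reduction in f
    with the reduction predicted by the quadratic model m(s) = g^T s + 0.5 s^T B s."""
    ared = f(x) - f(x + s)                 # actual reduction
    pred = -(g @ s + 0.5 * s @ B @ s)      # predicted (model) reduction
    rho = ared / pred
    if rho < eta1:
        delta *= 0.5                       # poor agreement: shrink the radius
    elif rho > eta2 and np.isclose(np.linalg.norm(s), delta):
        delta *= 2.0                       # good agreement on the boundary: expand
    accept = rho >= eta1                   # accept the trial step?
    return rho, delta, accept

# Toy usage: exact quadratic, so model and function agree and rho = 1
f = lambda x: float(x @ x)
x = np.array([1.0, 0.0])
g = 2 * x                                  # gradient at x
B = 2 * np.eye(2)                          # exact Hessian
s = np.array([-1.0, 0.0])                  # Newton step
rho, delta, accept = tr_ratio_update(f, x, s, g, B, delta=2.0)
```

When the model is exact, rho equals one and the step is accepted; the abstract's modification enriches precisely this agreement measure.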

6.
刘亚君, 刘新为. 《计算数学》 (Mathematica Numerica Sinica), 2016, 38(1): 96-112
Gradient methods are an important class of methods for unconstrained optimization, and their numerical performance depends strongly on the choice of step length. Noting that the BB step length implicitly contains second-order information about the objective function, this paper combines the BB method with the trust region framework: the reciprocal of the BB step length is used to approximate the Hessian matrix of the objective function, while the trust region subproblem allows the gradient method's step length to be chosen more flexibly, yielding monotone and nonmonotone trust region BB methods for unconstrained optimization. Global convergence of the algorithms is proved under suitable assumptions. Numerical experiments show that, compared with existing BB-type methods for unconstrained optimization, the decrease of e_k = ‖x_k − x*‖ in the nonmonotone trust region BB method exhibits a more pronounced staircase-like, monotone pattern, and the method therefore converges faster.
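For context, a minimal sketch of the classical (unmodified) BB gradient method whose step length the paper embeds in a trust region framework; the ill-conditioned quadratic test problem is illustrative, not from the paper:

```python
import numpy as np

def bb_gradient_method(grad, x0, tol=1e-8, max_iter=500):
    """Barzilai-Borwein gradient method: the step length
    alpha_k = s^T s / s^T y, with s = x_k - x_{k-1} and y = g_k - g_{k-1},
    satisfies a secant condition, so 1/alpha_k acts as a scalar
    approximation to the Hessian of the objective."""
    x = x0.astype(float)
    g = grad(x)
    alpha = 1.0                       # initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                 # keep alpha positive (holds for convex problems)
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x

# Toy usage: minimize 0.5*(x1^2 + 10*x2^2)
grad = lambda x: np.array([x[0], 10 * x[1]])
sol = bb_gradient_method(grad, np.array([1.0, 1.0]))
```

The iterates are typically nonmonotone in f, which is exactly why the paper pairs the BB step with a (non)monotone trust region safeguard.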

7.
This paper presents a new trust region algorithm for solving nonsmooth nonlinear equations that possess a smooth plus non-smooth decomposition. At each iteration, the method obtains a trial step by solving a system of linear equations, thus avoiding the need to solve a quadratic programming subproblem with a trust region bound. From a computational point of view, this approach may reduce the computational effort and hence improve efficiency. Furthermore, it is proved under appropriate assumptions that the algorithm is globally and locally superlinearly convergent. Some numerical examples are reported.

8.
This paper concerns a filter technique and its application to the trust region method for nonlinear programming (NLP) problems. We use our filter trust region algorithm to solve NLP problems with both equality and inequality constraints, rather than problems with only inequality constraints, as was introduced by Fletcher et al. [R. Fletcher, S. Leyffer, Ph.L. Toint, On the global convergence of an SLP-filter algorithm, Report NA/183, Department of Mathematics, Dundee University, Dundee, Scotland, 1999]. We incorporate this filter technique into the traditional trust region method so that the new algorithm possesses nonmonotonicity. Unlike the traditional trust region method, our algorithm applies a nonmonotone filter technique to find a new iterate when a trial step is not accepted. Under mild conditions, we prove that the algorithm is globally convergent.

9.
This paper proposes three preconditioned curvilinear path methods of modified approximate trust-region type for solving unconstrained optimization problems. A stable Bunch-Parlett factorization of the symmetric matrix is used to readily form the curvilinear path of the trust region subproblem, and a unit lower triangular matrix serves as the preconditioner for the optimal path and the modified gradient path. The preconditioner improves the eigenvalue distribution of the Hessian matrix and thereby accelerates the convergence of the preconditioned conjugate gradient path. Trial steps are generated from the trust region subproblems along the three paths, and the trust region strategy is combined with a nonmonotone line search technique as a new backtracking step. Theoretical analysis shows that, under reasonable conditions, the proposed algorithms are globally convergent with a locally superlinear convergence rate; numerical results demonstrate their effectiveness.

10.
An algorithm for solving the problem of minimizing a quadratic function subject to ellipsoidal constraints is introduced. This algorithm is based on the implicitly restarted Lanczos method to construct a basis for the Krylov subspace, in conjunction with a model trust region strategy to choose the step. The trial step is computed on the small-dimensional subspace that lies inside the trust region.

One of the main advantages of this algorithm is the way that the Krylov subspace is terminated. We introduce a termination condition that allows the gradient to be decreased on that subspace.

A convergence theory for this algorithm is presented. It is shown that this algorithm is globally convergent and should cope quite well with large-scale minimization problems. The theory is sufficiently general that it holds for any algorithm that projects the problem onto a lower-dimensional subspace.
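A simplified sketch of the subspace idea (a Lanczos-type basis plus a small trust region subproblem); the plain Gram-Schmidt basis construction, the crude shift loop, and the diagonal test matrix are illustrative simplifications, not the paper's implicitly restarted algorithm:

```python
import numpy as np

def krylov_tr_step(B, g, delta, k=5):
    """Project min g^T p + 0.5 p^T B p, ||p|| <= delta onto the
    k-dimensional Krylov subspace K_k(B, g), then solve the small
    projected trust region subproblem."""
    n = g.size
    Q = np.zeros((n, k))
    Q[:, 0] = g / np.linalg.norm(g)
    for j in range(1, k):
        w = B @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # Gram-Schmidt orthogonalization
        nw = np.linalg.norm(w)
        if nw < 1e-12:                     # Krylov subspace is exhausted
            Q = Q[:, :j]
            break
        Q[:, j] = w / nw
    T = Q.T @ B @ Q                        # small projected Hessian
    q = Q.T @ g                            # small projected gradient
    # Crude shift loop: increase lam until the small step fits the radius.
    lam = 0.0
    for _ in range(100):
        u = np.linalg.solve(T + lam * np.eye(T.shape[0]), -q)
        if np.linalg.norm(u) <= delta:
            break
        lam = lam * 2 + 1e-3
    return Q @ u                           # lift the step back to R^n

# Toy usage: SPD diagonal Hessian, radius large enough for the Newton step
B = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
g = np.ones(5)
p = krylov_tr_step(B, g, delta=10.0, k=5)
```

With k equal to the full dimension and a generous radius, the projected step recovers the Newton step -B^{-1}g, which matches the claim that the theory covers any algorithm projecting onto a lower-dimensional subspace.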
