Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper studies a new supermemory gradient algorithm for unconstrained optimization. At each iteration, the algorithm makes full use of information from previous iterates to generate a descent direction, and the step size is determined by a Wolfe line search; global convergence is proved under fairly weak conditions. Since the new algorithm requires no matrix computation or storage at any iteration, it is well suited to large-scale optimization problems.
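To make the scheme concrete, here is a minimal Python sketch of a supermemory gradient iteration of the kind described above. The combination rule d = -g + beta * (sum of the last m directions) is an illustrative assumption; the paper's actual parameter choices are not given in the abstract. SciPy's `line_search` enforces the Wolfe conditions.

```python
# Hedged sketch of a supermemory gradient iteration; the combination
# rule below is an illustrative assumption, not the paper's formula.
import numpy as np
from scipy.optimize import line_search

def supermemory_gradient(f, grad, x0, m=3, beta=0.1, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    past = []                                  # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * np.sum(past, axis=0) if past else -g
        if g @ d >= 0:                         # safeguard: keep d a descent direction
            d = -g
        alpha = line_search(f, grad, x, d)[0]  # Wolfe step size
        if alpha is None:                      # fall back if the search fails
            alpha = 1e-4
        x = x + alpha * d
        past = (past + [d])[-m:]               # only m vectors kept, no matrices
    return x
```

For instance, `supermemory_gradient(rosen, rosen_der, np.array([-1.2, 1.0]))` with SciPy's built-in `rosen` test function should make steady progress toward the minimizer at (1, 1).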

2.
Combining a nonmonotone trust region method with a nonmonotone line search technique, a new algorithm for unconstrained optimization is proposed. A line search is performed at every trust region step, so each iteration achieves sufficient descent and the iteration is accelerated. Under certain conditions, the algorithm is shown to be globally convergent with a locally superlinear convergence rate. Numerical experiments show that the algorithm is highly effective.

3.
A New Supermemory Gradient Algorithm for Unconstrained Optimization   Cited: 3 (self: 0, others: 3)
时贞军 《数学进展》2006,35(3):265-274
This paper proposes a new supermemory gradient algorithm for unconstrained optimization. The search direction is a linear combination of the negative gradient at the current point and the negative gradient at the previous point, and the step size is determined by either exact line search or Armijo search. Global convergence and a linear convergence rate are proved under very weak conditions. Because the algorithm avoids storing and computing matrices associated with the objective function, it is suitable for large-scale unconstrained optimization. Numerical experiments show the algorithm to be more effective than standard conjugate gradient methods.

4.
刘景辉  马昌凤  陈争 《计算数学》2012,34(3):275-284
Building on the traditional trust region method, a new trust region algorithm with line search is proposed for unconstrained optimization. The algorithm obtains the iterative step size by a large-step Armijo line search, which avoids the heavy cost of solving a trust region subproblem at every iteration and so makes it suitable for large-scale problems. Global convergence is proved under appropriate conditions, and numerical results show that the proposed algorithm is effective.
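The idea of replacing a subproblem re-solve with a line search can be sketched as follows. This is illustrative only: the cheap Cauchy-point step stands in for the paper's unspecified subproblem solver, and standard backtracking stands in for its large-step Armijo variant.

```python
# Illustrative sketch: Cauchy-point subproblem solver and constants are
# assumptions, not the paper's exact algorithm.
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model along -g within the trust region."""
    gn = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(gn**3 / (delta * gBg), 1.0)
    return -tau * (delta / gn) * g

def tr_with_armijo(f, grad, hess, x0, delta=1.0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cauchy_point(g, B, delta)
        pred = -(g @ s + 0.5 * s @ B @ s)           # predicted model decrease
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)  # agreement ratio
        if rho >= 0.25:                             # accept the trial step
            x = x + s
            if rho > 0.75:
                delta = min(2.0 * delta, 1e3)
        else:                                       # reject: Armijo search along s
            alpha, slope = 1.0, g @ s               # slope < 0 for the Cauchy step
            while f(x + alpha * s) > f(x) + 1e-4 * alpha * slope and alpha > 1e-12:
                alpha *= 0.5
            x = x + alpha * s
            delta *= 0.5
    return x
```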

5.
A New Class of Multi-Step Descent Algorithms under Curve Search   Cited: 1 (self: 0, others: 1)
A new class of multi-step descent algorithms under curve search is proposed, and global convergence with a linear convergence rate is proved under fairly weak conditions. The algorithms generate new iterates using information from several previous iterates together with a curve search technique; they converge stably, require no matrix computation or storage, and are suitable for large-scale optimization problems. Numerical experiments show the algorithms are effective.

6.
This paper presents a new supermemory gradient method for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods, line search, and subspace techniques. The main characteristic of this method is that, at each iteration, a lower-dimensional system of linear equations is solved only once to obtain a trial step, thus avoiding the solution of a quadratic trust region subproblem. Another is that when a trial step is not accepted, the method generates an iterate whose step length satisfies the Armijo line search rule, thus avoiding re-solving the linear system of equations. Under some reasonable assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computation.
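The "lower-dimensional system" idea can be illustrated as follows: restrict the trial step to a small subspace spanned by, say, the current gradient and recent directions, and solve the reduced system once per iteration. This is a generic subspace sketch under those assumptions, not the paper's exact construction.

```python
# Generic subspace-step sketch; the basis choice and the reduced
# Newton-type system are illustrative assumptions.
import numpy as np

def subspace_trial_step(g, B, basis_vectors):
    """Restrict the step to s = V z with V = [v1 ... vm] (m << n) and
    solve the small m-by-m system (V^T B V) z = -V^T g once."""
    V = np.column_stack(basis_vectors)       # n x m with m small
    H = V.T @ B @ V                          # reduced Hessian model
    z = np.linalg.lstsq(H, -V.T @ g, rcond=None)[0]  # robust to singular H
    return V @ z
```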

7.
A Class of Trust Region Algorithms with Nonmonotone Line Search   Cited: 1 (self: 0, others: 1)
By combining a nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for unconstrained optimization. The new algorithm solves the trust region subproblem only once per iteration, and at every iteration the Hessian approximation satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, global and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain...
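The abstract does not give the paper's update formula; the standard BFGS update below is one familiar example of a Hessian approximation that satisfies the quasi-Newton (secant) condition B+ s = y and stays positive definite whenever the curvature condition y·s > 0 holds, which a Wolfe line search guarantees.

```python
# Standard BFGS update shown as a representative example; the paper's
# own update rule is not reproduced in the abstract.
import numpy as np

def bfgs_update(B, s, y):
    """Return B+ satisfying the secant condition B+ @ s = y; positive
    definiteness is inherited when y @ s > 0 (the update is skipped
    otherwise to keep B positive definite)."""
    sy = y @ s
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return B                     # curvature too small: skip the update
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```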

8.
This paper presents a nonmonotone supermemory gradient algorithm for unconstrained optimization problems. At each iteration, the proposed method makes full use of information from several previous iterates and avoids the storage and computation of matrices associated with the Hessian of the objective function; it is therefore suitable for large-scale optimization problems and converges stably. Under some assumptions, the convergence properties of the algorithm are analyzed. Numerical results are reported to show the efficiency of the proposed method.

9.
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the current nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, rather than the current objective function value alone. A larger step size can be obtained in each line search procedure, and nonmonotonicity is retained when the nonmonotone term is incorporated into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem when a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems.
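A minimal sketch of the convex-combination nonmonotone term described above, assuming a fixed weight eta in [0, 1): the reference value is D_k = eta·D_{k-1} + (1-eta)·f(x_k), initialized with D_0 = f(x_0), and the backtracking test compares against D rather than f(x_k). The fixed weight is an assumption; the paper's exact weighting scheme is not given in the abstract.

```python
# Sketch with an assumed fixed weight eta.
def nonmonotone_backtrack(f, x, d, slope, D_prev, eta=0.85, c1=1e-4):
    """Accept f(x + a*d) <= D + c1*a*slope, where D is the
    convex-combination nonmonotone term. Because D can sit above f(x),
    the test is looser than monotone Armijo, allowing larger steps."""
    alpha = 1.0
    while f(x + alpha * d) > D_prev + c1 * alpha * slope and alpha > 1e-12:
        alpha *= 0.5
    f_new = f(x + alpha * d)
    D_next = eta * D_prev + (1.0 - eta) * f_new   # update the reference value
    return alpha, D_next
```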

10.
This paper proposes a new memory gradient algorithm for unconstrained optimization. Under Armijo search, the algorithm uses information from previous iterates at every iteration, increasing the freedom in parameter selection, and is suitable for large-scale unconstrained optimization. Global convergence of the algorithm is analyzed.

11.
This paper proposes a new memory gradient algorithm for unconstrained optimization. The algorithm uses information from previous iterates at every iteration, increasing the freedom in parameter selection, and is suitable for large-scale unconstrained optimization. Global convergence is analyzed, and numerical experiments show the algorithm is effective.

12.
In this paper, based on a simple model of the trust region subproblem, we propose a new self-adaptive trust region method with a line search technique for solving unconstrained optimization problems. Owing to the simple subproblem model, the new method requires less memory and lower computational cost. The trust region radius is adjusted with a new self-adaptive strategy that makes full use of the information at the current point. When the trial step increases the objective function, the method does not re-solve the subproblem but performs a line search from the failed point. Convergence properties of the method are proved under certain conditions. Numerical experiments show that the new method is effective and attractive for large-scale optimization problems.
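One common self-adaptive rule, shown as a stand-in since the abstract does not specify the paper's strategy, ties the radius to the current gradient norm, delta_k = c_k·||g_k||, and adapts the factor c_k through the usual agreement ratio:

```python
# Hedged example of a self-adaptive radius rule delta = c * ||g||;
# the paper's actual strategy is not reproduced here.
import numpy as np

def adaptive_radius(c, g, rho):
    """Adjust the scale factor c from the agreement ratio rho of the
    last trial step, then derive the new radius from the current
    gradient norm so the radius reflects information at the point."""
    if rho < 0.25:
        c *= 0.25            # model disagreed: tighten the region
    elif rho > 0.75:
        c *= 2.0             # model agreed well: relax the region
    return c, c * np.linalg.norm(g)
```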

13.
This paper presents a hybrid trust region algorithm for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods, line search, and trust region techniques. A feature of the proposed method is that at each iteration, a system of linear equations is solved only once to obtain a trial step. Further, when the trial step is not accepted, the method performs an inexact line search along it instead of solving a new linear system. Under reasonable assumptions, the algorithm is proven to be globally and superlinearly convergent. Numerical results are also reported that show the efficiency of the proposed method.

14.
Trust region methods are powerful and effective optimization methods. The conic model method is a newer type of method with more information available at each iteration than standard quadratic-based methods. The advantages of the two can be combined to form a more powerful method for constrained optimization. The trust region subproblem of our method is to minimize a conic function subject to the linearized constraints and a trust region bound. At the same time, the new algorithm still possesses robust global properties, and its global convergence is established under standard conditions.

15.
We propose a nonmonotone adaptive trust region method based on a simple conic model for unconstrained optimization. Unlike traditional trust region methods, the subproblem in our method is a simple conic model in which the Hessian of the objective function is approximated by a scalar matrix. The trust region radius is adjusted with a new self-adaptive strategy that uses information from the previous and current iterations. The new method requires less memory and computational effort. Global convergence and Q-superlinear convergence of the algorithm are established under mild conditions. Numerical results on a series of standard test problems show that the new method is effective and attractive for large-scale unconstrained optimization problems.
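For reference, a simple conic model with scalar Hessian approximation B = gamma·I takes the form sketched below. The horizon vector a and the rules for choosing gamma are taken as given inputs, since the abstract does not state them.

```python
# Evaluation of a simple conic model with scalar Hessian approximation;
# gamma and a are assumed supplied by the method's update rules.
import numpy as np

def simple_conic_model(fk, g, gamma, a, s):
    """m(s) = f_k + g.s / (1 - a.s) + gamma*||s||^2 / (2*(1 - a.s)^2);
    with a = 0 this reduces to the usual quadratic model."""
    t = 1.0 - a @ s          # must remain positive inside the region
    return fk + (g @ s) / t + 0.5 * gamma * (s @ s) / (t * t)
```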

16.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization   Cited: 3 (self: 0, others: 3)
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, markedly reducing the storage and work needed to compute the search direction and offering a new approach to large-scale unconstrained optimization. Under the usual assumptions, global convergence and a linear convergence rate are proved, and the superlinear convergence behavior is analyzed. Numerical experiments show the algorithm is more effective than the conjugate gradient method and suitable for large-scale unconstrained problems.
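One simple way to realize a diagonal approximation, shown purely as an illustration since the paper's correction formula is not given in the abstract, is to update the diagonal so that the weak secant condition s·(D+ s) = s·y holds:

```python
# Illustrative diagonal update enforcing the weak secant condition;
# not the paper's exact correction formula.
import numpy as np

def diagonal_update(d, s, y):
    """Update the diagonal d (representing D = diag(d)) so that
    s @ (D_new @ s) == s @ y, perturbing each entry in proportion to
    s_i^2. Only a vector is stored, so memory stays O(n)."""
    s2 = s * s
    denom = s2 @ s2
    if denom < 1e-16:
        return d
    d_new = d + ((s @ y - (d * s2).sum()) / denom) * s2
    return np.maximum(d_new, 1e-8)   # clamp to keep D positive definite
```

The search direction is then the cheap elementwise quotient -g / d, followed by an Armijo backtracking step.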

17.
In this paper, we propose a new trust region method for unconstrained optimization problems. The new method automatically adjusts the trust region radius of the subproblem at each iteration and has strong global convergence under mild conditions. We also analyze the global linear convergence and the local superlinear and quadratic convergence rates of the new method. Numerical results show that the new trust region method is effective and efficient in practical computation.

18.
Conjugate gradient methods are probably the most famous iterative methods for solving large-scale optimization problems in scientific and engineering computation, characterized by the simplicity of their iterations and their low memory requirements. It is well known that the search direction plays a central role in line search methods. In this article, we propose a new search direction together with the Wolfe line search technique for solving unconstrained optimization problems. Under this line search and some assumptions, the global convergence properties of the given methods are discussed. Numerical results and comparisons with other CG methods are given.
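The abstract does not state the proposed direction formula, so the skeleton below uses the familiar PRP+ coefficient as a stand-in to show where a new direction plugs into a Wolfe-line-search CG method:

```python
# Generic nonlinear CG skeleton; PRP+ is a stand-in for the article's
# new direction, which the abstract does not specify.
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:
            alpha = 1e-4                     # fall back if Wolfe search fails
        x = x + alpha * d
        g_new = grad(x)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ coefficient
        d = -g_new + beta * d                # only a few vectors are kept
        g = g_new
    return x
```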

19.
孙敏 《大学数学》2007,23(6):86-89
A nonmonotone multi-step curve search method is proposed for solving unconstrained optimization problems. The method has the following features: (1) in generating the next iterate it uses not only information at the current iterate but possibly also information from the previous m iterates (the multi-step aspect); (2) the descent direction and the step size are determined simultaneously, rather than first choosing a direction and then finding a step size by line search (the curve search technique); (3) a nonmonotone search strategy is adopted. Convergence of the method is proved under fairly weak conditions.
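A minimal sketch of feature (2), the curve search: the trial point moves along a curve x(a) = x_k - a·g_k + a²·c_k, so the direction and the step size are fixed by the same scalar a. The correction term `corr` stands in for the paper's multi-step term (built from the previous m iterates) and is assumed given.

```python
# Sketch of a curvilinear (curve) search step under the assumptions
# stated above; `corr` is an input standing in for the multi-step term.
import numpy as np

def curve_search_step(f, x, g, corr, c1=1e-4):
    """Backtrack on the curve x(a) = x - a*g + a^2*corr. The derivative
    of f(x(a)) at a = 0 is -||g||^2, so the Armijo-type test below is
    well defined."""
    alpha, slope = 1.0, -(g @ g)
    while f(x - alpha * g + alpha**2 * corr) > f(x) + c1 * alpha * slope:
        alpha *= 0.5
        if alpha < 1e-12:
            break
    return x - alpha * g + alpha**2 * corr
```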

20.
A New Class of Memory Gradient Methods and Their Global Convergence   Cited: 1 (self: 0, others: 1)
Memory gradient methods for unconstrained optimization are studied. Using information at the current and previous iterates to generate descent directions, a new class of unconstrained optimization algorithms is obtained, and its global convergence is proved under Wolfe line search. The new algorithms are structurally simple, require no matrix computation or storage, and are suitable for large-scale optimization problems. Numerical experiments show the algorithms are effective.
