Similar Documents
 20 similar documents found (search time: 531 ms)
1.
刘景辉  马昌凤  陈争 《计算数学》2012,34(3):275-284
Building on the classical trust region method, a new trust region algorithm with line search is proposed for unconstrained optimization. The algorithm obtains the iteration step length via a large-step Armijo line search, avoiding the heavy cost of solving a trust region subproblem at every iteration, and is therefore suited to large-scale optimization problems. Under suitable conditions, global convergence of the algorithm is proved. Numerical results show that the proposed algorithm is effective.
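To make the scheme concrete, here is a minimal Python sketch of a trust region iteration that, instead of shrinking the radius and re-solving the subproblem after a rejected trial step, takes an Armijo step along the already computed trial direction. The constants, the simple Newton-based subproblem solve, and the radius update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def armijo_step(f, x, g, d, alpha0=1.0, c1=1e-4, shrink=0.5, max_iter=30):
    """Backtracking Armijo line search starting from a large step alpha0."""
    fx, slope = f(x), c1 * g.dot(d)
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + alpha * slope:   # sufficient decrease
            return alpha
        alpha *= shrink
    return alpha

def tr_with_linesearch(f, grad, hess, x, delta=1.0, eta=0.1, tol=1e-8, max_iter=200):
    """Trust region iteration with an Armijo fallback on rejected steps.
    Assumes hess(x) returns a positive definite matrix."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        # Approximate subproblem solution: Newton step truncated to the radius.
        d = np.linalg.solve(B, -g)
        if np.linalg.norm(d) > delta:
            d *= delta / np.linalg.norm(d)
        pred = -(g.dot(d) + 0.5 * d.dot(B).dot(d))   # predicted reduction
        ared = f(x) - f(x + d)                        # actual reduction
        if pred > 0 and ared / pred >= eta:
            x = x + d
            delta = min(2.0 * delta, 1e3)             # successful: expand
        else:
            # Rejected trial step: Armijo search along d instead of
            # re-solving the subproblem with a smaller radius.
            alpha = armijo_step(f, x, g, d)
            x = x + alpha * d
            delta = max(alpha * np.linalg.norm(d), 1e-8)
    return x
```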

2.
Combining the nonmonotone trust region method with a nonmonotone line search technique, this paper proposes a new class of algorithms for unconstrained optimization. In contrast to traditional nonmonotone trust region algorithms, the new algorithm obtains the next iterate by a nonmonotone Wolfe line search at every step, and the trust region radius is adjusted using the approximate solution of the subproblem and the line-search step length. The new algorithm therefore never needs to re-solve the subproblem, and at every iteration the approximate Hessian of the objective remains positive definite. Under certain conditions the algorithm is shown to be globally convergent and Q-quadratically convergent. Numerical experiments show that the algorithm is very effective.
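A sketch of the nonmonotone Wolfe test this abstract refers to: sufficient decrease is measured against the maximum of the last M function values rather than the current one, while the curvature condition is the usual Wolfe one. The bracketing strategy and all constants are illustrative assumptions.

```python
import numpy as np

def nonmonotone_wolfe(f, grad, x, d, recent_f, c1=1e-4, c2=0.9,
                      alpha=1.0, grow=2.0, max_iter=50):
    """Find alpha satisfying the nonmonotone Wolfe conditions along a
    descent direction d; recent_f holds the last M objective values."""
    f_ref = max(recent_f)              # nonmonotone reference value
    g0d = grad(x).dot(d)               # initial directional derivative (< 0)
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        xa = x + alpha * d
        if f(xa) > f_ref + c1 * alpha * g0d:     # decrease test failed
            hi = alpha
        elif grad(xa).dot(d) < c2 * g0d:         # curvature test failed
            lo = alpha
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if hi < np.inf else grow * alpha
    return alpha
```

As the abstract indicates, the calling loop would then set the next radius from the subproblem step and this step length, e.g. delta_next = alpha * ||d||; that exact rule is a guess.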

3.
Combining the nonmonotone trust region method with a nonmonotone line search technique, a new unconstrained optimization algorithm is proposed. A line search is performed at every trust region step, so each iteration achieves sufficient descent, which speeds up the iteration. Under certain conditions, the algorithm is shown to be globally convergent with a locally superlinear convergence rate. Numerical experiments show that the algorithm is very effective.

4.
A class of quasi-Newton nonmonotone trust region algorithms and their convergence (Cited: 2; self: 0, others: 2)
刘培培  陈兰平 《数学进展》2008,37(1):92-100
This paper proposes a class of nonmonotone trust region algorithms for unconstrained optimization. By combining a nonmonotone Wolfe line search with the trust region framework, the new algorithm never needs to re-solve the subproblem; moreover, at every iteration the quasi-Newton equation is satisfied while the approximate Hessian Bk of the objective remains positive definite. Under suitable conditions, global convergence of the algorithm is proved. Numerical results demonstrate its effectiveness.
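One standard way to keep a quasi-Newton matrix positive definite while (approximately) satisfying the secant equation is Powell's damped BFGS update, sketched below. This is a well-known device offered as an illustration, not necessarily the exact update used in the paper.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update: remains positive definite even when
    the raw curvature condition s'y > 0 fails (B assumed positive definite)."""
    Bs = B.dot(s)
    sBs = s.dot(Bs)
    sy = s.dot(y)
    # Damping: blend y with Bs so the effective curvature stays positive.
    if sy >= theta_min * sBs:
        theta = 1.0
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs            # damped "y"
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / s.dot(r)
```

Since s'r >= theta_min * s'Bs > 0 by construction, positive definiteness is transferred from each Bk to the next, which is exactly the property the abstract emphasizes.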

5.
A class of nonmonotone trust region algorithms with line search (Cited: 15; self: 0, others: 15)
This paper proposes a new class of nonmonotone trust region algorithms for unconstrained optimization. Unlike the usual nonmonotone trust region algorithms, when a trial step is unsuccessful the trust region subproblem is not re-solved; instead a nonmonotone line search is performed, which reduces the computational cost. Under suitable conditions, global convergence of the algorithm is proved.
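The control flow described here, sketched in Python: the ratio test compares against the largest of the last M objective values, and a rejected trial step triggers a nonmonotone backtracking search along the same step rather than a second subproblem solve. The history container and all constants are illustrative (hist can be, e.g., a collections.deque with maxlen=M).

```python
import numpy as np

def nm_trust_region_step(f, x, g, B, s, hist, eta=0.1, c1=1e-4, shrink=0.5):
    """One iteration: accept trial step s, or line-search along it.
    hist holds the last M objective values (most recent included)."""
    f_ref = max(hist)                                   # nonmonotone reference
    pred = -(g.dot(s) + 0.5 * s.dot(B).dot(s))          # model reduction
    if pred > 0 and (f_ref - f(x + s)) / pred >= eta:
        x_new = x + s                                   # trial step accepted
    else:
        # Nonmonotone Armijo backtracking along the rejected step s.
        alpha = 1.0
        while f(x + alpha * s) > f_ref + c1 * alpha * g.dot(s) and alpha > 1e-12:
            alpha *= shrink
        x_new = x + alpha * s
    hist.append(f(x_new))
    return x_new
```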

6.
An interior point algorithm for linearly constrained optimization based on trust region techniques (Cited: 1; self: 0, others: 1)
欧宜贵  刘琼林 《应用数学》2005,18(3):365-372
Based on trust region techniques, this paper proposes an interior point algorithm for optimization problems with linear equality and nonnegativity constraints. Its distinctive feature is that, to obtain the search direction, only one system of linear equations has to be solved per iteration, which avoids solving a subproblem with a trust region bound; an inexact Armijo line search is then used to obtain the next interior iterate. From a computational point of view, this technique reduces the amount of work. Under suitable conditions, it is also proved that every accumulation point of the iterate sequence generated by the algorithm is a KKT point of the original problem.
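A sketch of this pattern for min f(x) subject to Ax = b, x > 0. The particular KKT-type system below, with an X^{-2} barrier-like regularization, is my illustrative choice of "a single linear system per iteration"; whether it matches the authors' system is an assumption.

```python
import numpy as np

def interior_step(f, grad, hess, A, x, c1=1e-4, tau=0.995, shrink=0.5):
    """One iteration for min f(x) s.t. Ax = b, x > 0, with x strictly
    feasible; assumes hess(x) + diag(1/x^2) is positive definite."""
    g, H = grad(x), hess(x)
    n, m = x.size, A.shape[0]
    # Single linear system per iteration (no trust-region subproblem):
    #   [ H + X^{-2}  A' ] [ d ]   [ -g ]
    #   [ A            0 ] [ w ] = [  0 ]
    Xinv2 = np.diag(1.0 / x**2)
    K = np.block([[H + Xinv2, A.T], [A, np.zeros((m, m))]])
    d = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
    # Fraction-to-boundary: largest step keeping x + alpha*d strictly positive.
    neg = d < 0
    alpha = min(1.0, tau * np.min(-x[neg] / d[neg])) if neg.any() else 1.0
    # Inexact Armijo backtracking from that interior-safe step.
    while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d) and alpha > 1e-12:
        alpha *= shrink
    return x + alpha * d
```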

7.
A class of trust region algorithms with nonmonotone line search (Cited: 1; self: 0, others: 1)
By combining the nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for unconstrained optimization. The new algorithm needs to solve the trust region subproblem only once per iteration, and at every iteration the Hessian approximation satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain...
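For context, a standard way to solve the trust region subproblem just once and cheaply is the dogleg path, sketched below for a positive definite Hessian approximation B. The paper does not specify its subproblem solver, so this is purely illustrative.

```python
import numpy as np

def dogleg(g, B, delta):
    """Dogleg approximate solution of min g's + 0.5 s'Bs, ||s|| <= delta,
    assuming B is positive definite."""
    sN = np.linalg.solve(B, -g)               # full quasi-Newton step
    if np.linalg.norm(sN) <= delta:
        return sN
    sC = -(g.dot(g) / g.dot(B).dot(g)) * g    # Cauchy (steepest descent) step
    nC = np.linalg.norm(sC)
    if nC >= delta:
        return -(delta / np.linalg.norm(g)) * g
    # Walk from the Cauchy point toward the Newton step until the boundary:
    # find t with ||sC + t*(sN - sC)|| = delta via the quadratic formula.
    v = sN - sC
    a, b, c = v.dot(v), 2 * sC.dot(v), nC**2 - delta**2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sC + t * v
```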

8.
刘海林 《经济数学》2007,24(2):213-216
This paper proposes a new trust region method for nonlinear least squares in which each trust region subproblem is solved only once and the one-dimensional step length factor of each iteration is prescribed in advance, so the line search stage is avoided entirely, greatly improving the efficiency of the algorithm. Global convergence of the algorithm is proved under certain conditions.
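A Levenberg-Marquardt-style reading of this scheme: each "subproblem" is one regularized linear least squares solve, and the step factor t is given rather than searched for. The damping mu and the fixed factor are illustrative assumptions.

```python
import numpy as np

def nls_fixed_factor(residual, jacobian, x, mu=1e-3, t=1.0, tol=1e-10, max_iter=100):
    """Gauss-Newton/LM-type iteration for min 0.5*||r(x)||^2 with a
    prescribed step factor t (no line search)."""
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                              # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        # One regularized solve per iteration: (J'J + mu*I) d = -J'r.
        d = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
        x = x + t * d                            # t is given, not searched
    return x
```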

9.
Based on a nonmonotone line search technique and the IMPBOT algorithm, an ODE-type hybrid method for unconstrained optimization is proposed. Its main features are: to obtain a trial step, the method does not have to solve a subproblem with a trust region bound at each iteration, only a single system of linear equations; and when the trial step is rejected, a modified Wolfe-type nonmonotone line search is performed to obtain the next iterate, so that repeated solution of the linear system is avoided. Under certain conditions, the proposed algorithm is globally and superlinearly convergent. Numerical results show that the method is effective.
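ODE-type methods view minimization as following the gradient flow dx/dt = -grad f(x); one implicit Euler step with a quasi-Newton model B then requires solving a single linear system, matching the "one linear system per trial step" feature above. The concrete system (I/h + B) d = -g below is the common textbook form; that it matches IMPBOT's is an assumption.

```python
import numpy as np

def ode_trial_step(g, B, h):
    """Trial step of an ODE-type method: one implicit Euler step of the
    gradient flow with model Hessian B, i.e. solve (I/h + B) d = -g.
    Large h approaches a quasi-Newton step; small h, a gradient step."""
    return np.linalg.solve(np.eye(g.size) / h + B, -g)
```

If the trial step is rejected, the abstract's recipe is a modified Wolfe-type nonmonotone line search along d rather than shrinking h and re-solving the system.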

10.
景书杰  苗荣  李少娟 《数学杂志》2014,34(3):569-576
This paper studies unconstrained optimization problems. Using the basic idea of the MBFGS trust region algorithm, a new MBFGS trust region algorithm is proposed by modifying the BFGS update formula and incorporating a line search technique, which widens the range of problems to which trust region algorithms apply. Global convergence and superlinear convergence of the algorithm are proved under certain conditions.
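The best-known modified BFGS (MBFGS) correction is the Li-Fukushima one, which perturbs y so the curvature condition holds even for nonconvex objectives; a sketch follows. Whether the paper's modification coincides with this one is an assumption, and the constant C is illustrative.

```python
import numpy as np

def mbfgs_update(B, s, y, g, C=1e-2):
    """Modified BFGS (Li-Fukushima style): replace y by
    y* = y + r*s with r = C*||g|| + max(0, -s'y / ||s||^2),
    which guarantees s'y* > 0 and so preserves positive definiteness
    (B assumed positive definite)."""
    r = C * np.linalg.norm(g) + max(0.0, -s.dot(y) / s.dot(s))
    y_star = y + r * s
    Bs = B.dot(s)
    return B - np.outer(Bs, Bs) / s.dot(Bs) + np.outer(y_star, y_star) / s.dot(y_star)
```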

11.
An arc method is presented for solving the equality constrained nonlinear programming problem. The curvilinear search path used at each iteration of the algorithm is a second-order approximation to the geodesic of the constraint surface which emanates from the current feasible point and has the same initial heading as the projected negative gradient at that point. When the constraints are linear, or when the step length is sufficiently small, the algorithm reduces to Rosen's Gradient Projection Method.
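For the linear-constraint case the abstract mentions, the search direction is Rosen's projected negative gradient; a minimal sketch, assuming constraints Ax = b with A of full row rank:

```python
import numpy as np

def rosen_direction(A, g):
    """Project -g onto the null space of A (Rosen's gradient projection):
    d = -(I - A'(AA')^{-1}A) g, so that A d = 0."""
    w = np.linalg.solve(A @ A.T, A @ g)
    return -(g - A.T @ w)
```

On a curved constraint surface, the arc method replaces the straight ray x + t*d with a second-order arc approximating the geodesic whose initial tangent is this d.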

12.
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm as soon as the angle condition fails is theoretically justified.
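A sketch of the interruption rule this result justifies: run CG on the local quadratic model and stop as soon as the angle between the current CG iterate (viewed as a search direction) and -g drifts too close to π/2. The tolerance eps and the safeguards are illustrative assumptions; g is assumed nonzero.

```python
import numpy as np

def truncated_cg(B, g, eps=1e-2, max_iter=None):
    """CG applied to the model min_d g'd + 0.5 d'Bd; the loop is
    interrupted once cos(angle(d, -g)) drops below eps, which the
    monotonicity result above justifies."""
    n = g.size
    max_iter = max_iter or n
    d = np.zeros(n)
    r = -g.copy()                      # model residual: -(g + B d)
    p = r.copy()
    for _ in range(max_iter):
        Bp = B.dot(p)
        pBp = p.dot(Bp)
        if pBp <= 0:                   # nonpositive curvature: stop
            break
        alpha = r.dot(r) / pBp
        d_next = d + alpha * p
        # Angle condition: cosine of the angle between d_next and -g.
        cos = -g.dot(d_next) / (np.linalg.norm(g) * np.linalg.norm(d_next))
        if cos < eps:                  # angle too close to pi/2: interrupt
            break
        r_next = r - alpha * Bp
        d, beta = d_next, r_next.dot(r_next) / r.dot(r)
        p, r = r_next + beta * p, r_next
        if np.linalg.norm(r) < 1e-12:  # model solved to high accuracy
            break
    return d if d.any() else -g        # fall back to steepest descent
```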

13.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a kind of line search method. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search direction and to find a step-size by using various inexact line searches. Using more information at the current iterative step may improve the performance of the algorithm. This motivates us to find some new gradient algorithms which may be more effective than standard conjugate gradient methods. The notion of a uniformly gradient-related direction is useful and is used to analyze the global convergence of the new algorithm. The global convergence and linear convergence rate of the new algorithm are investigated under diverse weak conditions. Numerical experiments show that the new algorithm seems to converge more stably and is superior to other similar methods in many situations.
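A minimal instance of such a "combination" direction: mix the current negative gradient with the previous search direction, safeguard descent, and take an inexact (Armijo) step. The fixed mixing weight beta is a simple illustrative choice, not the paper's rule.

```python
import numpy as np

def gradient_related_step(f, grad, x, d_prev, beta=0.4, c1=1e-4, shrink=0.5):
    """New direction d = -g + beta*d_prev, safeguarded to remain a
    descent direction, followed by Armijo backtracking."""
    g = grad(x)
    d = -g + beta * d_prev
    if g.dot(d) >= 0:                # safeguard: fall back to steepest descent
        d = -g
    alpha = 1.0
    while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d) and alpha > 1e-12:
        alpha *= shrink
    return x + alpha * d, d          # return d for the next combination
```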

14.
In this paper, an algorithm is developed for solving a nonlinear programming problem with linear constraints. The algorithm performs two major computations. First, the search vector is determined by projecting the negative gradient of the objective function on a polyhedral set defined in terms of the gradients of the equality constraints and the near-binding inequality constraints. This least-distance program is solved by Lemke's complementary pivoting algorithm after eliminating the equality constraints using Cholesky's factorization. The second major calculation determines a stepsize by first computing an estimate based on a quadratic approximation of the function and then finalizing the stepsize using Armijo's inexact line search. It is shown that any accumulation point of the algorithm is a Kuhn-Tucker point. Furthermore, it is shown that, if an accumulation point satisfies the second-order sufficiency optimality conditions, then the whole sequence of iterates converges to that point. Computational testing of the algorithm is presented.
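The second major calculation can be sketched directly: a quadratic model along the ray supplies the starting stepsize, and Armijo backtracking finalizes it. All constants here are illustrative assumptions.

```python
import numpy as np

def quad_then_armijo(f, x, g, d, c1=1e-4, shrink=0.5, alpha_max=10.0):
    """Initial stepsize from the quadratic fit through f(x), f(x+d) and
    the slope g'd, then Armijo backtracking from that estimate."""
    f0, gd = f(x), g.dot(d)
    f1 = f(x + d)                             # one trial evaluation at alpha = 1
    curv = 2.0 * (f1 - f0 - gd)               # fitted curvature along d
    alpha = -gd / curv if curv > 0 else 1.0   # minimizer of the quadratic
    alpha = min(max(alpha, 1e-4), alpha_max)
    while f(x + alpha * d) > f0 + c1 * alpha * gd and alpha > 1e-12:
        alpha *= shrink                       # Armijo finalization
    return alpha
```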

15.
The majority of first-order methods for large-scale convex-concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem's domain should be proximal-friendly, that is, admit a strongly convex function whose linear perturbations are easy to minimize. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is indeed available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical algorithm using an LMO: conditional gradient. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex-concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and share a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper was to develop alternative (and seemingly much simpler) decomposition techniques based on an LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
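The classical LMO-based algorithm the abstract refers to for smooth minimization is conditional gradient (Frank-Wolfe); a sketch over the probability simplex, whose LMO is just a coordinate argmin, with simplex_lmo standing in for whatever cheap LMO the domain provides:

```python
import numpy as np

def frank_wolfe(grad, x0, lmo, max_iter=500):
    """Conditional gradient: each iteration calls only the linear
    minimization oracle lmo(g) = argmin over the domain of <g, v>."""
    x = x0
    for k in range(max_iter):
        g = grad(x)
        v = lmo(g)                    # cheap LMO call, no projection needed
        gamma = 2.0 / (k + 2.0)       # standard open-loop stepsize
        x = x + gamma * (v - x)
    return x

def simplex_lmo(g):
    """LMO of the probability simplex: the vertex minimizing <g, v>."""
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v
```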

16.
Interior-point algorithms are among the most efficient techniques for solving complementarity problems. In this paper, a procedure for globalizing interior-point algorithms by using the maximum stepsize is introduced. The algorithm combines exact or inexact interior-point and projected-gradient search techniques and employs a line-search procedure for the natural merit function associated with the complementarity problem. For linear problems, the maximum stepsize is shown to be acceptable if the Newton interior-point search direction is employed. Complementarity and optimization problems are discussed, which the algorithm is able to process by either finding a solution or showing that no solution exists. A modification of the algorithm for dealing with infeasible linear complementarity problems is introduced which, in practice, employs only interior-point search directions. Computational experiments on the solution of complementarity problems and convex programming problems by the new algorithm are included.
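Reading "natural merit function" as the standard min-map residual of a linear complementarity problem LCP(M, q) — find x >= 0 with Mx + q >= 0 and x'(Mx + q) = 0 — is an assumption; under it, the merit function and the maximum interior stepsize look as follows.

```python
import numpy as np

def lcp_merit(M, q, x):
    """Min-map merit for LCP(M, q): 0.5*||min(x, Mx + q)||^2,
    zero exactly at solutions of the complementarity problem."""
    phi = np.minimum(x, M @ x + q)
    return 0.5 * phi.dot(phi)

def max_stepsize(x, s, dx, ds, tau=0.9995):
    """Largest alpha (capped at 1) keeping (x, s) + alpha*(dx, ds) > 0."""
    z, dz = np.concatenate([x, s]), np.concatenate([dx, ds])
    neg = dz < 0
    return min(1.0, tau * np.min(-z[neg] / dz[neg])) if neg.any() else 1.0
```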

17.
A primal interior point method is developed for linear programming problems in which the linear objective function is to be maximised over polyhedra that are not necessarily in standard form. This algorithm concurs with the affine scaling method of Dikin when the polyhedron is in standard form and satisfies the usual conditions imposed for using that method. If the search direction is regarded as a function of the current iterate, then it is shown that this function has a unique, continuous extension to the boundary. In fact, on any given face, this extension is just the value the search direction would have for the problem of maximising the objective function over that face. This extension is exploited to prove convergence. The algorithm presented here can exploit such special constraint structure as bounds, ranges, and free variables without increasing the size of the linear programming problem.
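For reference, Dikin's affine scaling step for the standard-form case the abstract mentions (max c'x subject to Ax = b, x > 0); a sketch, with the boundary fraction gamma an illustrative constant:

```python
import numpy as np

def dikin_step(A, c, x, gamma=0.95):
    """One affine scaling iteration for max c'x, Ax = b, x > 0,
    assuming x is strictly feasible and A has full row rank."""
    X2 = np.diag(x**2)
    w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
    d = X2 @ (c - A.T @ w)                          # ascent direction, A d = 0
    # Step a fraction gamma of the way to the boundary of the orthant.
    neg = d < 0
    alpha = gamma * np.min(-x[neg] / d[neg]) if neg.any() else 1.0
    return x + alpha * d
```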

18.
《Optimization》2012,61(8):1283-1295
In this article we present the fundamental idea, concepts and theorems of a basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the simplex method. However, unlike the simplex method, which iterates from a basic point to an improved adjacent basic point via a pivot operation, the basic line search algorithm moves (also by pivot operation) from a basic line containing two basic feasible points to an improved basic line, likewise containing two basic feasible points, whose objective values are no worse than those of the two points on the previous basic line. The basic line search algorithm may skip some adjacent vertices, so it can converge to an optimal solution faster than the simplex method. For example, for a 2-dimensional problem, the basic line search algorithm can find an optimal solution in only one iteration.

19.
This paper proposes a generalized gradient projection algorithm with a conjugate gradient parameter for optimization problems with nonlinear inequality constraints. The conjugate gradient parameter in the algorithm is easy to obtain, and the initial point can be chosen arbitrarily. Moreover, since the algorithm uses only information from the previous search direction, the computational cost is reduced. Global convergence of the algorithm is obtained under fairly weak conditions. Numerical results show that the algorithm is effective.

20.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm is described which uses an intermediate (and variable) amount of storage and which demonstrates convergence that is also intermediate, that is, generally better than that observed for conjugate gradient algorithms but not so good as in a quasi-Newton approach. The new algorithm uses a strategy of generating a form of conjugate gradient search direction for most iterations, but it periodically uses a quasi-Newton step to improve the convergence. Some theoretical background for the new algorithm was presented in an earlier paper; here we examine properties of the new algorithm and its implementation. We also present the results of some computational experience.
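A minimal sketch of this strategy: mostly conjugate gradient directions, with a periodic limited-memory quasi-Newton (L-BFGS two-loop) step, where the period and the memory size govern the storage/convergence trade-off. The specific recursions and the period are assumptions, not the authors' exact scheme.

```python
import numpy as np

def lbfgs_direction(g, mem):
    """Two-loop recursion: returns -H*g from stored (s, y) pairs in mem
    (oldest first)."""
    q, alphas = g.copy(), []
    for s, y in reversed(mem):
        a = s.dot(q) / y.dot(s)
        alphas.append(a)
        q -= a * y
    if mem:
        s, y = mem[-1]
        q *= y.dot(s) / y.dot(y)               # initial Hessian scaling
    for (s, y), a in zip(mem, reversed(alphas)):
        b = y.dot(q) / y.dot(s)
        q += (a - b) * s
    return -q

def hybrid_direction(k, g, g_prev, d_prev, mem, period=5):
    """CG (Polak-Ribiere) direction on most iterations; an L-BFGS step
    every `period` iterations to accelerate convergence."""
    if k % period == 0 and mem:
        return lbfgs_direction(g, mem)
    beta = max(0.0, g.dot(g - g_prev) / g_prev.dot(g_prev))
    return -g + beta * d_prev
```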
