Similar Literature
20 similar records found (search time: 15 ms)
1.
Under the conditions that the objective function is uniformly convex and the Wolfe line search is used, this paper gives several sufficient conditions for the global convergence of the DFP algorithm for unconstrained optimization, and compares them with the conditions in [1].

2.
《Optimization》2012,61(2):339-352
Abstract

This paper analyzes the open question of the convergence of the DFP algorithm with inexact line searches. We prove that the DFP algorithm is convergent for uniformly convex quadratic functions under commonly used inexact line searches.

3.
This paper is concerned with the open problem as to whether the DFP method with inexact line search converges globally to the minimum of a uniformly convex function. We study this problem by way of a Gauss-Newton approach rather than an ordinary Newton approach. We also propose a derivative-free line search that can be implemented conveniently by a backtracking process and has the attractive property that any iterative method equipped with it generates a sequence of iterates that is approximately norm descent. Moreover, if the Jacobian matrices are uniformly nonsingular, then the generated sequence converges. Under appropriate conditions, we establish global and superlinear convergence of the proposed Gauss-Newton based DFP method, which answers the open problem in the affirmative.
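A backtracking, derivative-free line search of the kind described might be sketched as follows; the acceptance rule and the constants `sigma`, `rho`, and `eps` are illustrative assumptions in the spirit of the abstract, not the paper's actual parameters.

```python
import numpy as np

def norm_descent_linesearch(F, x, d, sigma=1e-4, rho=0.5, eps=1e-3, max_back=50):
    """Derivative-free backtracking line search for nonlinear equations
    F(x) = 0 (a sketch). A steplength a is accepted once
        ||F(x + a*d)|| <= (1 + eps) * ||F(x)|| - sigma * a**2 * ||d||**2,
    which keeps the residual norms of the iterates approximately
    decreasing without requiring any derivative of F."""
    norm_Fx = np.linalg.norm(F(x))
    a = 1.0
    for _ in range(max_back):
        if np.linalg.norm(F(x + a * d)) <= (1 + eps) * norm_Fx - sigma * a**2 * (d @ d):
            return a
        a *= rho
    return a
```

Because the test compares residual norms rather than directional derivatives, the rule can be evaluated by a plain backtracking loop, which is what makes it convenient to implement.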

4.
A DFP Algorithm with Adjusted Search Direction and Inexact Line Search
This paper introduces a new class of Broyden algorithms with adjusted search directions. We focus on the convergence of the DFP algorithm with adjusted search direction. Under certain inexact line searches, we prove that for continuously differentiable objective functions the algorithm is globally convergent, and that for uniformly convex objective functions the rate of convergence is one-step superlinear. From the proof in this paper, one can also conclude that the DFP algorithm itself is one-step superlinearly convergent for uniformly convex objective functions.

5.
A Class of Revised Broyden Algorithms Without Exact Line Search
In this paper, we discuss the convergence of the Broyden algorithms with revised search direction. Under some inexact line searches, we prove that the algorithms are globally convergent for continuously differentiable functions and the rate of local convergence of the algorithms is one-step superlinear and n-step second-order for uniformly convex objective functions.

6.
Global Convergence Analysis of the DFP Algorithm
徐大川 《计算数学》1997,19(3):287-292
1. Introduction. Theoretical analysis and extensive numerical experiments show that, among the algorithms for solving (1.1), quasi-Newton methods are one of the most effective classes. The DFP algorithm is the earliest quasi-Newton method; it was first given by Davidon [2] and modified by Fletcher and Powell [3]. The computational steps of the DFP algorithm are as follows. Algorithm 1.1. 1°. Choose x_1 ∈ R^n and a symmetric positive definite B_1 ∈ R^{n×n}; set k := 1. 2°. Compute g_k = ∇f(x_k); if g_k = 0, terminate with solution x_k; otherwise go to the next step. 3°. Set d_k = -B_k^{-1} g_k. 4°. Perform a line search to determine the steplength α_k. In the above algorithm, the steplength α_k can be determined in two ways: one is exact line search, i.e. α_k minimizes f along d_k; the other is inexact line search. This paper considers the Wolfe line search, i.e. α_k satisfies the Wolfe conditions…
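The steps of Algorithm 1.1 can be sketched in Python as below. As a simplification, H approximates the inverse Hessian directly (so the direction is d = -H g rather than -B^{-1} g), and a plain backtracking Armijo rule stands in for the Wolfe search; all constants are illustrative.

```python
import numpy as np

def dfp(f, grad, x0, tol=1e-8, max_iter=200):
    """Sketch of the DFP quasi-Newton iteration (cf. Algorithm 1.1).
    H approximates the inverse Hessian; the steplength uses a
    backtracking Armijo rule instead of the Wolfe conditions,
    purely for illustration."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                    # step 1: H_1 symmetric positive definite
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # step 2: stop when the gradient vanishes
            break
        d = -H @ g                        # step 3: quasi-Newton direction
        a, c = 1.0, 1e-4                  # step 4: backtracking Armijo steplength
        while f(x + a * d) > f(x) + c * a * (g @ d):
            a *= 0.5
        s = a * d
        y = grad(x + s) - g
        x = x + s
        if s @ y > 1e-12:                 # DFP update of the inverse-Hessian approximation
            H = H + np.outer(s, s) / (s @ y) - np.outer(H @ y, H @ y) / (y @ H @ y)
    return x
```

The curvature guard `s @ y > 1e-12` simply skips the update when the Wolfe curvature condition (which would guarantee s'y > 0) is not enforced by the Armijo-only search.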

7.
On the Nonmonotone Line Search
The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. These analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between nonmonotone line search and R-linear convergence under the assumption that the objective function is uniformly convex. In addition, taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest falling back to a standard Armijo line search when the nonmonotone condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
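A minimal sketch of a nonmonotone Armijo rule in the Grippo-Lampariello-Lucidi style, paired with plain gradient descent; the window length `M` and the constants are illustrative assumptions, not this paper's specific scheme.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, c=1e-4, rho=0.5, max_back=50):
    """Nonmonotone Armijo rule (a sketch): the sufficient-decrease test
    compares against the maximum of the last few function values
    instead of f(x_k) alone, so occasional increases are tolerated."""
    f_ref = max(f_hist)                  # reference value over recent history
    a = 1.0
    for _ in range(max_back):
        if f(x + a * d) <= f_ref + c * a * (g @ d):
            return a
        a *= rho
    return a

def gd_nonmonotone(f, grad, x0, M=5, tol=1e-6, max_iter=2000):
    """Gradient descent driven by the nonmonotone line search above."""
    x = np.asarray(x0, dtype=float)
    hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        a = nonmonotone_armijo(f, x, -g, g, hist)
        x = x - a * g
        hist = (hist + [f(x)])[-M:]      # keep only the last M function values
    return x
```

Setting `M = 1` recovers the standard monotone Armijo search, which is exactly the fallback behavior the abstract recommends when the nonmonotone condition misbehaves.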

8.
We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength calculation is based on a Taylor expansion at two successive iterates and a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.

9.
In this paper, we analyze the global convergence of the least-change secant method proposed by Dennis and Wolkowicz when applied to convex objective functions. One of the distinguishing features of this method is that the Dennis-Wolkowicz update does not necessarily belong to the Broyden convex family and can be close to the DFP update, yet it still has the self-correcting property. We prove that, for convex objective functions, this method with the commonly used Wolfe line search is globally convergent. We also provide some numerical results.

10.
This paper presents a method for minimizing the sum of a possibly nonsmooth convex function and a continuously differentiable function. As in the convex case developed by the author, the algorithm is a descent method which generates successive search directions by solving quadratic programming subproblems. An inexact line search ensures global convergence of the method to stationary points.

11.
In this paper, we investigate the strong convergence of an inexact proximal-point algorithm. It is known that the proximal-point algorithm converges weakly to a solution of a maximal monotone operator, but fails to converge strongly. Solodov and Svaiter (Math. Program. 87:189–202, 2000) introduced a new proximal-type algorithm to generate a strongly convergent sequence and established a convergence result in Hilbert space. Subsequently, Kamimura and Takahashi (SIAM J. Optim. 13:938–945, 2003) extended the Solodov and Svaiter result to the setting of uniformly convex and uniformly smooth Banach space. On the other hand, Rockafellar (SIAM J. Control Optim. 14:877–898, 1976) gave an inexact proximal-point algorithm which is more practical than the exact one. Our purpose is to extend the Kamimura and Takahashi result to a new inexact proximal-type algorithm. Moreover, this result is applied to the problem of finding the minimizer of a convex function on a uniformly convex and uniformly smooth Banach space. L.C. Zeng’s research was partially supported by the Teaching and Research Award Fund for Outstanding Young Teachers in Higher Education Institutions of MOE, China and by the Dawn Program Foundation in Shanghai. J.C. Yao’s research was partially supported by the National Science Council of the Republic of China.

12.
Convergence of the Non-quasi-Newton Nonconvex Family
陈兰平  焦宝聪 《计算数学》2000,22(3):369-378
1. Introduction. For unconstrained optimization problems, quasi-Newton methods are currently among the most mature and most widely used solution methods. Over the past two decades, the convergence properties of quasi-Newton methods have been a focus of theoretical research on nonlinear optimization algorithms. The study of quasi-Newton algorithms with inexact line search began with Powell [1] in 1976, who proved the global and superlinear convergence of the BFGS algorithm with the Wolfe search. In 1978, Byrd, Nocedal and Ya-Xiang Yuan [3] successfully extended Powell's result to the restricted convex Broyden family. In 1989, Nocedal [4] proved, under the assumption that the objective function is uniformly convex, that the BFGS algorithm with backtracking search…

13.
A. El Ghali 《Optimization》2016,65(7):1497-1518
We present an implementable algorithm for minimizing a convex function which is not necessarily differentiable subject to linear equality constraints and to nonnegativity bounds on the variables. The algorithm is based on extending the variant proposed by Luenberger to the nondifferentiable case and using the bundle techniques introduced by Lemaréchal to approximate the subdifferential of the objective function. In particular, at each iteration, we compute a search direction by solving a quadratic subproblem, and an inexact line search along this direction yields a decrease in the objective value. Under some assumptions, the convergence of the proposed algorithm is analysed. Finally, some numerical results are presented, which show that the algorithm performs efficiently.

14.
On the convergence property of the DFP algorithm
The DFP algorithm of unconstrained optimization possesses excellent convergence properties for convex functions. However, a convergence theory of the DFP algorithm without the convexity assumption has not yet been established. This paper gives the following result: if the objective function is suitably smooth, and if the DFP algorithm produces a convergent point sequence, then the limit point of the sequence is a critical point of the objective function. Also, some open questions are mentioned. Supported by the National Science Foundation of China.

15.
We develop and analyze an affine-scaling inexact generalized Newton algorithm, combined with a nonmonotone interior backtracking line search technique, for solving systems of semismooth equations subject to bounds on the variables. By combining the inexact affine-scaling generalized Newton method with an interior backtracking line search, each iterate takes an inexact generalized Newton backtracking step that maintains strict interior-point feasibility. Global convergence results are developed in a very general setting in which trial steps are computed by the affine-scaling generalized Newton-like method, augmented by an interior backtracking line search with projection onto the feasible set. Under reasonable conditions we establish that, close to a regular solution, the inexact generalized Newton method converges locally q-superlinearly of order p. We characterize the order of local convergence in terms of the convergence behavior of the quality of the approximate subdifferentials, and indicate how to choose an inexact forcing sequence that preserves the rapid convergence of the proposed algorithm. A nonmonotone criterion should speed up the convergence in some ill-conditioned cases.

16.
A New Supermemory Gradient Algorithm for Unconstrained Optimization
时贞军 《数学进展》2006,35(3):265-274
This paper proposes a new supermemory gradient algorithm for unconstrained optimization. The algorithm takes as its search direction a linear combination of the negative gradient at the current point and the negative gradient at the previous point, with the steplength determined by exact line search or by the Armijo search. Under very weak conditions, the algorithm is proved to be globally convergent with a linear convergence rate. Since the algorithm avoids storing and computing matrices related to the objective function, it is well suited to large-scale unconstrained optimization problems. Numerical experiments show that the algorithm is more effective than standard conjugate gradient algorithms.
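The direction described in the abstract, a linear combination of the current and previous negative gradients, might be sketched as follows; the fixed mixing weight `beta`, the steepest-descent safeguard, and the Armijo constants are illustrative choices, not the paper's actual rules.

```python
import numpy as np

def supermemory_gradient(f, grad, x0, beta=0.3, tol=1e-8, max_iter=1000):
    """Sketch of a (super)memory gradient iteration: the search direction
    is d_k = -g_k - beta * g_{k-1}, with the steplength chosen by a
    backtracking Armijo rule. No matrix is stored or updated, which is
    what makes this kind of method attractive for large problems."""
    x = np.asarray(x0, dtype=float)
    g_prev = np.zeros_like(x)            # no previous gradient at the first step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g - beta * g_prev           # combine current and previous negative gradients
        if g @ d > -1e-12 * (g @ g):     # safeguard: fall back to steepest descent
            d = -g
        a, c = 1.0, 1e-4                 # backtracking Armijo steplength
        while f(x + a * d) > f(x) + c * a * (g @ d):
            a *= 0.5
        x = x + a * d
        g_prev = g
    return x
```

Only two gradient vectors are kept between iterations, so the memory cost is O(n), in contrast with the O(n^2) storage of quasi-Newton matrices.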

17.
We propose an inexact Newton method with a filter line search algorithm for nonconvex equality-constrained optimization. Inexact Newton methods are needed for large-scale applications in which the iteration matrix cannot be explicitly formed or factored. We incorporate inexact Newton strategies into the filter line search, yielding an algorithm that ensures global convergence. An analysis of the global behavior of the algorithm and numerical results on a collection of test problems are presented.

18.
A Class of Globally Convergent Memory Gradient Methods and Their Linear Convergence Rate
This paper studies a new class of memory gradient methods for unconstrained optimization problems. Global convergence is proved under the strong Wolfe line search. When the objective function is uniformly convex, the linear convergence rate is analyzed. Numerical experiments show that the algorithms are effective.

19.
张明望  黄崇超 《应用数学》2004,17(2):315-321
An inexact infeasible interior-point algorithm is proposed for box-constrained convex quadratic programming; the iteration directions used by the algorithm need only be computed to a relative accuracy. Under the assumption that the initial point lies in some neighborhood of the central path, the global convergence of the algorithm is proved.

20.
This paper presents a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, in which the objective function is the composition of a convex function and a differentiable function. The algorithm finds feasible descent directions by solving a sequence of quadratic programming subproblems, and new iterates are generated by an inexact line search. Under rather weak conditions, we prove the global convergence of the algorithm.
