Similar Articles
20 similar articles found (search time: 31 ms).
1.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is obtained from among these trial points by choosing the lowest point in the sense of function reduction. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the superlinear convergence result follows.
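A minimal sketch of the multi-directional idea, under assumptions of ours rather than the paper's (the test function, line search, and particular Broyden parameters are illustrative): several Broyden-family inverse-Hessian approximations produce candidate directions, a line search runs along each, and the lowest trial point becomes the next iterate.

```python
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking line search along direction d."""
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho
    return alpha

def broyden_inverse_update(H, s, y, phi):
    """Broyden-family update of an inverse Hessian approximation H:
    phi = 0 gives DFP, phi = 1 gives BFGS."""
    sy = s @ y
    Hy = H @ y
    yHy = y @ Hy
    dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    v = s / sy - Hy / yHy
    return dfp + phi * yHy * np.outer(v, v)

def multidir_qn(f, grad, x0, phis=(0.0, 0.5, 1.0), tol=1e-8, maxit=200):
    n = len(x0)
    Hs = [np.eye(n) for _ in phis]          # one approximation per direction
    x = x0.copy()
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        trials = []
        for H in Hs:                        # directions could be searched in parallel
            d = -H @ g
            a = backtracking(f, x, d, g)
            trials.append(x + a * d)
        x_new = min(trials, key=f)          # lowest trial point wins
        s, y = x_new - x, grad(x_new) - g
        if s @ y > 1e-12:                   # keep approximations positive definite
            Hs = [broyden_inverse_update(H, s, y, phi) for H, phi in zip(Hs, phis)]
        x = x_new
    return x

# usage on an assumed test problem (Rosenbrock)
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
print(multidir_qn(f, grad, np.array([-1.2, 1.0])))
```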

2.
Molecular similarity index measures the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically and apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and biomolecules such as proteins. Our computational results show that our algorithm achieves higher accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
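A small sketch of the optimization setup described here, with assumed toy data and a simplified Gaussian-overlap objective; the paper's analytic gradient is not reproduced, so scipy's BFGS falls back to finite-difference gradients.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# toy atom coordinates for two small "molecules" (assumed data)
A = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.2, 0.0]])
B = np.array([[0.2, 0.1, 0.0], [1.6, 0.2, 0.1], [0.1, 1.3, 0.2]])

def neg_overlap(v):
    """Negative Gaussian-overlap similarity as a function of the six free
    variables v = (tx, ty, tz, rx, ry, rz) placing B relative to A."""
    t, angles = v[:3], v[3:]
    Bm = Rotation.from_euler('xyz', angles).apply(B) + t
    d2 = ((A[:, None, :] - Bm[None, :, :])**2).sum(-1)   # pairwise squared distances
    return -np.exp(-d2).sum()                            # maximize overlap = minimize negative

# BFGS over the six rigid-body variables
res = minimize(neg_overlap, np.zeros(6), method='BFGS')
print(res.x, -res.fun)
```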

3.
We develop an affine-scaling algorithm for box-constrained optimization which has the property that each iterate is a scaled cyclic Barzilai–Borwein (CBB) gradient iterate that lies in the interior of the feasible set. Global convergence is established for a nonmonotone line search, while there is local R-linear convergence at a nondegenerate local minimizer where the second-order sufficient optimality conditions are satisfied. Numerical experiments show that the convergence speed is insensitive to problem conditioning. The algorithm is particularly well suited for image restoration problems which arise in positron emission tomography where the cost function can be infinite on the boundary of the feasible set. This material is based upon work supported by the National Science Foundation under Grants 0203270, 0619080, and 0620286.
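A rough sketch of only the cyclic Barzilai–Borwein ingredient, with a plain projection onto the box standing in for the paper's affine scaling and without its nonmonotone line-search safeguard; the test problem and parameters are assumptions.

```python
import numpy as np

def cbb_box(grad, lo, hi, x0, cycle=4, maxit=500, tol=1e-10):
    """Projected gradient iteration that reuses one Barzilai-Borwein (BB)
    stepsize for `cycle` consecutive iterations -- the CBB idea."""
    x = np.clip(x0, lo, hi)
    g = grad(x)
    alpha = 0.1                                   # assumed initial stepsize
    s = y = None
    for k in range(maxit):
        if k % cycle == 0 and s is not None and s @ y > 0:
            alpha = s @ s / (s @ y)               # BB1 stepsize, held for a cycle
        x_new = np.clip(x - alpha * g, lo, hi)    # projection keeps x feasible
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.linalg.norm(s) < tol:
            break
        x, g = x_new, g_new
    return x

# assumed example: convex quadratic on a box
Q = np.diag([2.0, 5.0, 10.0])
b = np.ones(3)
grad = lambda x: Q @ x - b
print(cbb_box(grad, lo=np.zeros(3), hi=np.full(3, 0.4), x0=np.full(3, 0.2)))
# unconstrained minimizer is (0.5, 0.2, 0.1); the box clips it to (0.4, 0.2, 0.1)
```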

4.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; moreover, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix-vector product is itself the vector to be approximated; it uses the reduced gradient and information from the two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.
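The matrix-free flavor of such limited-storage approximations can be seen in the standard L-BFGS two-loop recursion, shown below with memory 2; this is a generic sketch, not the paper's 2-step variant with its restart iteration and diagonal preconditioning.

```python
import numpy as np

def two_loop(g, S, Y, gamma=None):
    """L-BFGS two-loop recursion: returns H*g, where H is the BFGS
    approximation of the inverse Hessian built from the stored pairs
    (S[i], Y[i]); with memory 2 only the last two iterations are kept."""
    q = g.copy()
    alphas = []
    rhos = [1.0 / (s @ y) for s, y in zip(S, Y)]
    for s, y, rho in reversed(list(zip(S, Y, rhos))):   # newest pair first
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if gamma is None and S:                             # standard initial scaling
        gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    r = (gamma if gamma else 1.0) * q
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r += (a - beta) * s
    return r                                            # = H g; no matrix is ever stored

# usage with memory 2: only the two most recent (s, y) pairs (assumed data)
S = [np.array([0.10, 0.00]), np.array([0.05, 0.02])]
Y = [np.array([0.20, 0.10]), np.array([0.12, 0.05])]
g = np.array([1.0, -1.0])
print(two_loop(g, S, Y))
```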

5.
Optimization, 2012, 61(2): 137–150
An algorithm for addressing multiple objective linear programming (MOLP) problems is presented. The algorithm adapts the path-following primal-dual algorithm to MOLP problems by using the single-objective algorithm to generate interior search directions and then combining them to derive a single direction along which to step to the next iterate. Combining the different interior search directions is done by interacting with a Decision Maker (DM) to obtain locally relevant preference information for the value vectors along these directions. This preference information is then used to derive an approximation to the gradient of an implicitly known utility function, and a projection of this gradient provides a direction vector along which we step to the next iterate. At each iteration the algorithm also generates boundary points that aid in deriving the combined search direction. We refer to these boundary points, generated sequentially during the process, as anchor points; they serve as candidate solutions at which to terminate the iterative process.

6.
This paper is concerned with collinear scaling algorithms for unconstrained minimization in which the underlying local approximants are forced to interpolate the objective function value and gradient at only the two most recent iterates. By suitably modifying the procedure of Sorensen (1980) for deriving such algorithms, we show that two members of the derived algorithm class, related to the DFP and BFGS methods respectively, are locally and q-superlinearly convergent. This local analysis, as well as the results it yields, exhibits the same sort of duality as that of Broyden, Dennis and Moré (1973) and Dennis and Moré (1974) for the DFP and BFGS methods. The results in this paper also imply the local and q-superlinear convergence of the collinear scaling algorithms of Sorensen (1982, pp. 154–156) related to the DFP and BFGS methods. Research supported in part by funds provided by the Washington State University Research and Arts Committee, by NSF Grant DMS-8414460 and by DOE Grant DE-FG06-85ER25007.

7.
In this paper, an adaptive trust region algorithm that uses Moreau–Yosida regularization is proposed for solving nonsmooth unconstrained optimization problems. The proposed algorithm combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius utilizes not only the function information but also the gradient information. The global convergence and the local superlinear convergence of the proposed algorithm are proven under suitable conditions. Finally, preliminary numerical comparisons with some existing algorithms suggest that the proposed algorithm is quite promising for solving nonsmooth unconstrained optimization problems.
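For reference, the Moreau–Yosida regularization of the nonsmooth objective $f$ that underlies such methods is defined by (standard definition; the notation is an assumption, not taken from the paper):

```latex
F_{\lambda}(x) \;=\; \min_{y \in \mathbb{R}^n} \left\{ f(y) + \frac{1}{2\lambda}\,\|y - x\|^{2} \right\}, \qquad \lambda > 0.
```

$F_{\lambda}$ is continuously differentiable with $\nabla F_{\lambda}(x) = (x - p_{\lambda}(x))/\lambda$, where $p_{\lambda}(x)$ is the unique minimizer above, and it has the same minimizers as $f$; this is what makes smooth machinery such as BFGS-type updates applicable.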

8.
Wan Zhong, Feng Dongdong. Mathematica Numerica Sinica (《计算数学》), 2011, 33(4): 387–396
Motivated by the advantages of nonmonotone line searches in seeking optimal solutions of optimization problems, a new class of nonmonotone cautious BFGS algorithms is proposed. Unlike existing methods, the algorithm parameter that controls the degree of nonmonotonicity is not fixed; instead, it is adjusted automatically using available objective-function and gradient information, so as to improve the numerical behavior of the algorithm. Under suitable assumptions, global convergence of the new nonmonotone cautious BFGS algorithm is established. The algorithm is tested on benchmark optimization problems...
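A sketch combining a max-type (GLL) nonmonotone Armijo search with a cautious BFGS update; the paper's adaptive rule for the nonmonotonicity parameter is not reproduced, and the memory length, thresholds, and test function below are assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5):
    """Accept a step if f(x + a*d) <= max of the recent function values
    plus a fraction of the predicted decrease (GLL nonmonotone test)."""
    f_ref = max(f_hist)                 # reference value over the memory window
    alpha = 1.0
    while f(x + alpha * d) > f_ref + delta * alpha * (g @ d):
        alpha *= rho
    return alpha

def cautious_bfgs(f, grad, x0, M=5, eps=1e-6, tol=1e-8, maxit=500):
    n = len(x0)
    H = np.eye(n)
    x = x0.copy()
    f_hist = [f(x)]
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        a = nonmonotone_armijo(f, x, d, g, f_hist)
        x_new = x + a * d
        s, y = x_new - x, grad(x_new) - g
        # cautious update: skip the BFGS update unless curvature is safely positive
        if s @ y >= eps * np.linalg.norm(g) * (s @ s):
            r = 1.0 / (s @ y)
            V = np.eye(n) - r * np.outer(s, y)
            H = V @ H @ V.T + r * np.outer(s, s)
        x = x_new
        f_hist = (f_hist + [f(x)])[-M:]   # sliding window of recent values
    return x

# assumed nonconvex test function
f = lambda x: (x[0]**2 - x[1])**2 + 0.01 * x[1]**2
grad = lambda x: np.array([4 * x[0] * (x[0]**2 - x[1]),
                           -2 * (x[0]**2 - x[1]) + 0.02 * x[1]])
print(cautious_bfgs(f, grad, np.array([1.5, -1.0])))
```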

9.
A Conic Trust-Region Method for Nonlinearly Constrained Optimization
Trust-region methods are powerful optimization methods. The conic model method is a newer type of method that makes more information available at each iteration than standard quadratic-model methods. Can the advantages of the two be combined into a more powerful method for constrained optimization? In this paper we give a positive answer and present a conic trust-region algorithm for nonlinearly constrained optimization problems. The trust-region subproblem of our method is to minimize a conic function subject to the linearized constraints and the trust-region bound. The use of conic functions allows the model to interpolate function values and gradient values of the Lagrangian function at both the current point and the previous iterate. Since conic functions extend quadratic functions, they approximate general nonlinear functions better than quadratic functions do. At the same time, the new algorithm possesses robust global properties. In this paper we establish the global convergence of the new algorithm under standard conditions.
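For concreteness, a standard form of the conic model used in such subproblems is (notation assumed, not taken from the paper):

```latex
\phi_k(s) \;=\; f_k \;+\; \frac{g_k^{T} s}{1 - a_k^{T} s}
\;+\; \frac{1}{2}\,\frac{s^{T} B_k s}{\left(1 - a_k^{T} s\right)^{2}},
```

minimized subject to the linearized constraints and $\|s\| \le \Delta_k$. The horizon vector $a_k$ supplies the extra degrees of freedom that allow interpolation at both the current and the previous iterate; for $a_k = 0$ the model reduces to the usual quadratic model.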

10.
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm when the angle condition fails is theoretically justified.
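The stated monotonicity is easy to observe numerically: run CG from zero on the Newton system $Bz = -g$ of a local quadratic model and record the angle between each CG iterate and $-g$ (the SPD data below are assumptions).

```python
import numpy as np

def cg_angles(B, g, iters):
    """Run CG from z0 = 0 on B z = -g and record the angle (degrees)
    between each iterate z_k and the negative gradient -g."""
    n = len(g)
    z, r = np.zeros(n), -g.copy()      # residual r = -g - B z, with z = 0
    p = r.copy()
    angles = []
    for _ in range(iters):
        Bp = B @ p
        a = (r @ r) / (p @ Bp)
        z = z + a * p
        c = (z @ -g) / (np.linalg.norm(z) * np.linalg.norm(g))
        angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
        r_new = r - a * Bp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return angles

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
B = M @ M.T + 6 * np.eye(6)            # SPD local Hessian model (assumed data)
g = rng.standard_normal(6)
print(cg_angles(B, g, 6))              # the angles come out nondecreasing
```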

11.
Convergence of the BFGS Algorithm for Nonconvex Optimization Problems
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm with Wolfe line searches applied to a nonconvex objective function, and gives a sufficient condition under which the algorithm converges.

12.
Based on the NEWUOA algorithm, a new derivative-free algorithm, named LCOBYQA, is developed. Its main aim is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a nonlinear function whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative: at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions while leaving some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. The model is then minimized over a trust-region subproblem using the conjugate gradient method to obtain a new iterate. At times the new iterate is found from a model iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.
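The "remaining freedom" step described above corresponds to least-change Frobenius-norm updating as in NEWUOA; in a standard formulation (notation assumed) the new model solves:

```latex
\min_{Q_{k+1}} \;\; \left\| \nabla^{2} Q_{k+1} - \nabla^{2} Q_k \right\|_F^{2}
\quad \text{s.t.} \quad Q_{k+1}(y_i) = f(y_i), \quad i = 1, \dots, m,
```

where $\{y_1, \dots, y_m\}$ is the current interpolation set.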

13.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the framework of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results on a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the (classical) conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to the CPU time metric it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
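The core direction can be sketched by applying one BFGS update to the scaled identity $\theta I$ and setting $d = -Hg$; the spectral scaling $\theta = s^{T}y / y^{T}y$ and the data below are assumptions, and the restart and preconditioning logic of the full algorithm is omitted.

```python
import numpy as np

def scaled_memoryless_bfgs_dir(g, s, y):
    """Direction d = -H g, where H is one BFGS update of the scaled
    identity theta*I using only the most recent pair (s, y)."""
    n = len(g)
    theta = (s @ y) / (y @ y)          # spectral (BB-type) scaling
    rho = 1.0 / (s @ y)
    V = np.eye(n) - rho * np.outer(s, y)
    H = theta * (V @ V.T) + rho * np.outer(s, s)   # memoryless: no history kept
    return -H @ g

# usage on assumed data
s = np.array([0.30, -0.10, 0.05])
y = np.array([0.50, -0.20, 0.15])
g = np.array([1.00, 0.50, -0.20])
print(scaled_memoryless_bfgs_dir(g, s, y))
```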

14.
ON THE CONVERGENCE OF PARALLEL BFGS METHOD
Chen Zhong, Fei Pusheng (Department of Mathematics, Wuhan University, Wuhan 430072, China), Zhou Yuncai...

15.
A BFGS Algorithm for Nonconvex Minimization Problems
For unconstrained optimization problems with nonconvex objective functions, this paper gives a class of modified BFGS algorithms. The idea of the algorithm is to modify the approximate Hessian matrix of the nonconvex function so that a descent direction is obtained while the quasi-Newton condition remains satisfied. When the step length is chosen by a general line-search model, the local convergence of the algorithm is proved.

16.
Since 1965, there has been significant progress in the theoretical study of quasi-Newton methods for solving nonlinear equations, especially in local convergence analysis. However, studies of the global convergence of quasi-Newton methods are relatively scarce, especially for the BFGS method. To ensure global convergence, some merit function, such as the squared-norm merit function, is typically used. In this paper, we propose an algorithm for solving nonlinear monotone equations that combines the BFGS method and the hyperplane projection method. We also prove that the proposed BFGS method converges globally if the equation is monotone and Lipschitz continuous, without any differentiability requirement on the equation, which makes it possible to solve some nonsmooth equations. An attractive property of the proposed method is that its global convergence is independent of any merit function. We also report some numerical results to show the efficiency of the proposed method.
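A sketch of the hyperplane projection step (in the style of Solodov–Svaiter), with a plain steepest-descent-like direction standing in for the paper's BFGS direction; the parameters and test equation are assumptions.

```python
import numpy as np

def hyperplane_projection_solve(F, x0, beta=0.5, sigma=1e-4, tol=1e-8, maxit=200):
    """Solve the monotone equation F(x) = 0. A trial point z = x + a*d is
    found such that -F(z)'d >= sigma*a*||d||^2; monotonicity then implies
    the solution set lies on the far side of the hyperplane
    {u : F(z)'(u - z) = 0}, so x is projected onto it."""
    x = x0.copy()
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                                  # stand-in direction; the paper
        a = 1.0                                  # uses a BFGS direction here
        while -(F(x + a * d) @ d) < sigma * a * (d @ d):
            a *= beta                            # search for the separating point
        z = x + a * d
        Fz = F(z)
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz  # projection onto the hyperplane
    return x

# assumed example: a smooth monotone equation, Jacobian I + diag(cos x) >= 0
F = lambda x: x + np.sin(x)
print(hyperplane_projection_solve(F, np.array([1.0, -2.0, 0.5])))
```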

17.
Satisfying the sufficient descent condition is a strength of a conjugate gradient method. Here, it is shown that under the Wolfe line search conditions, the search directions generated by the memoryless BFGS conjugate gradient algorithm proposed by Shanno satisfy the sufficient descent condition for uniformly convex functions.
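The sufficient descent condition in question is, in its standard form (with some constant $c > 0$):

```latex
g_k^{T} d_k \;\le\; -c\,\|g_k\|^{2} \qquad \text{for all } k.
```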

18.
We study separable optimization problems whose objective function is the sum of several smooth functions, and propose a normalized incremental gradient algorithm. At each sub-iteration, the algorithm only needs to compute the normalized negative gradient direction of one (or a few) of the component functions as the iteration direction. Under certain conditions, convergence of the normalized incremental gradient algorithm with divergent-series step sizes is proved. As an application, the new algorithm and the (non-normalized) incremental gradient algorithm of Bertsekas D P and Tsitsiklis J N...
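A sketch of the normalized incremental gradient iteration with divergent-series step sizes $a/k$; the component functions below are an assumed least-squares example.

```python
import numpy as np

def normalized_incremental_gradient(grads, x0, a=1.0, epochs=200):
    """Each sub-iteration steps along the *normalized* negative gradient of
    one component function; the step sizes a/k are diminishing with a
    divergent sum, as the convergence theory requires."""
    x = x0.copy()
    k = 1
    for _ in range(epochs):
        for gi in grads:                  # cycle through the component functions
            g = gi(x)
            ng = np.linalg.norm(g)
            if ng > 1e-12:
                x = x - (a / k) * g / ng  # unit-length direction, diminishing step
            k += 1
    return x

# assumed example: f(x) = sum_i 0.5*||x - c_i||^2, minimized at the centroid
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
grads = [lambda x, c=c: x - c for c in centers]
print(normalized_incremental_gradient(grads, np.zeros(2)))  # approaches (0, 1)
```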

19.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward finite-difference directional derivatives of the components of the gradient. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than: the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update; the weak quasi-Newton diagonal algorithm; the quasi-Cauchy diagonal algorithm; the diagonal approximation of the Hessian by the least-change secant updating strategy with minimization of the trace of the matrix; the Cauchy algorithm with Oren–Luenberger scaling in its complementary form (i.e., the Barzilai–Borwein algorithm); the steepest descent algorithm; and the classical BFGS algorithm. However, our algorithm is inferior to the limited-memory BFGS algorithm (L-BFGS).
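A sketch of the diagonal updating idea: estimate each diagonal Hessian element by a forward finite difference of the corresponding gradient component, then take a diagonally scaled gradient step; the safeguards, scaling, and test problem here are assumptions, not the paper's.

```python
import numpy as np

def diag_hessian_fd(grad, x, h=1e-6):
    """Approximate the Hessian diagonal: B_ii ~ (g_i(x + h*e_i) - g_i(x)) / h,
    i.e. forward-difference directional derivatives of the gradient components."""
    g0 = grad(x)
    d = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        d[i] = (grad(x + e)[i] - g0[i]) / h
    return np.maximum(d, 1e-8)            # keep the diagonal safely positive

def diag_qn(f, grad, x0, maxit=100, tol=1e-8):
    x = x0.copy()
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / diag_hessian_fd(grad, x)              # diagonally scaled step
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5
        x = x + t * d
    return x

# assumed test problem: ill-conditioned quadratic plus a quartic term
f = lambda x: 0.5 * (x[0]**2 + 100 * x[1]**2) + 0.25 * x[0]**4
grad = lambda x: np.array([x[0] + x[0]**3, 100 * x[1]])
print(diag_qn(f, grad, np.array([2.0, 1.0])))
```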

20.
For the case in which the Hessian matrix of a convex function at the minimizer has rank deficiency one, this paper gives a class of modified BFGS algorithms for unconstrained optimization problems. The idea of the algorithm is to add a correction term to the convex function to obtain an equivalent model, and then to simplify this model to derive a modified BFGS algorithm. The paper proves that the algorithm is superlinearly convergent, and numerical comparisons of the modified BFGS algorithm with the tensor method show that the algorithm is more effective for solving unconstrained optimization problems with rank deficiency one.
