Similar literature (20 results)
1.
This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method in which the approximation to the inverse Hessian is, at each step, updated from the identity matrix; its search direction can therefore be computed without storing any matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. We therefore propose a memoryless SR1 method that is updated from a positive scaling of the identity, where the scaling factor is derived so that positive definiteness of the updating matrices is preserved while the conditioning of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
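The memoryless idea can be sketched compactly: the inverse-Hessian approximation is the SR1 update of a scaled identity θI, applied directly to the gradient, so no matrix is ever stored. In the sketch below the function name, the skip tolerance, and the plain input parameter θ are illustrative choices of ours; the paper derives an optimal θ from a conditioning argument.

```python
import numpy as np

def memoryless_sr1_direction(g, s, y, theta):
    """Search direction d = -H g, where H is the SR1 update of theta*I
    built from the last step s and gradient difference y.

    H = theta*I + v v^T / (v^T y) with v = s - theta*y, so d can be
    formed with vector operations only (no stored matrix)."""
    v = s - theta * y
    denom = v @ y
    # Skip the rank-one term when the update is (numerically) undefined.
    if abs(denom) < 1e-10 * np.linalg.norm(v) * np.linalg.norm(y):
        return -theta * g
    return -(theta * g + (v @ g / denom) * v)
```

The direction agrees with explicitly forming H and computing -H g, but costs only O(n) storage.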

2.
Two basic disadvantages of the symmetric rank one (SR1) update are that it may not preserve positive definiteness when starting from a positive definite approximation, and that it can be undefined. A simple remedy is to restart the update with the initial approximation, usually the identity matrix, whenever these difficulties arise. However, numerical experience shows that restarting with the identity matrix is not a good choice. Instead of the identity matrix we use a positive multiple of it. The positive scaling factor used is the optimal solution of the following measure: maximize the determinant of the update subject to a bound of one on its largest eigenvalue. This measure is motivated by considering the volume of the symmetric difference of the two ellipsoids arising from the current and updated quadratic models in quasi-Newton methods. A replacement in the form of a positive multiple of the identity matrix is provided for the SR1 update whenever it is not positive definite or is undefined. Our experiments indicate that with this simple initial scaling, undefined updates and loss of positive definiteness are avoided on all iterations of the SR1 method.
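The restart remedy described above can be sketched as follows. Here the scaling factor γ is simply passed in, whereas the paper derives it from the determinant-maximization measure; the function name and the tolerance are our own illustrative choices.

```python
import numpy as np

def sr1_update_with_restart(B, s, y, gamma):
    """SR1 update of B from step s and gradient difference y;
    restart from gamma*I whenever the update is undefined or
    loses positive definiteness (illustrative sketch)."""
    r = y - B @ s
    denom = r @ s
    # Undefined update: the denominator (y - B s)^T s is (near) zero.
    if abs(denom) < 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
        return gamma * np.eye(len(s))
    B_new = B + np.outer(r, r) / denom
    # Lost positive definiteness: replace by the scaled identity.
    if np.any(np.linalg.eigvalsh(B_new) <= 0):
        return gamma * np.eye(len(s))
    return B_new
```

When the update succeeds it satisfies the secant equation B_new s = y; otherwise the scaled identity is returned.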

3.
Compared with other quasi-Newton update formulas, the SR1 update is simpler and requires less computation per iteration. However, the convergence of the standard SR1 update is usually proved under the strong assumption of uniform linear independence. Building on earlier work, a new modified SR1 formula is proposed, and its local convergence is proved both with and without the uniform linear independence assumption. Numerical experiments verify the effectiveness of the proposed update formula and the reasonableness of the assumptions made. The experimental data show that, under certain conditions, a quasi-Newton algorithm based on the proposed formula converges better than one based on the traditional SR1 update.

4.
In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both modified updates are evaluated and used in the switching rule: when the condition number of the modified SR1 update is better than that of the modified BFGS update, the step of the proposed quasi-Newton method is the modified SR1 step; otherwise it is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small-, medium- and large-scale optimization problems. The numerical results are reported and analyzed to show the superiority of the proposed method.
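In its plainest form, the switching idea compares the condition numbers of the two candidate updates and takes the step from the better-conditioned one. The sketch below uses the standard (unmodified) BFGS and SR1 formulas rather than the paper's modified updates, and the function name and safeguards are our own; it assumes the curvature condition holds so at least one candidate is positive definite.

```python
import numpy as np

def switched_direction(B, g, s, y):
    """Build BFGS and SR1 updates of B, keep the positive definite
    candidates, pick the one with the smallest condition number, and
    return it with the quasi-Newton step -B_new^{-1} g."""
    Bs = B @ s
    B_bfgs = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
    candidates = [B_bfgs]
    r = y - Bs
    denom = r @ s
    # Only admit the SR1 candidate when its denominator is safe.
    if abs(denom) > 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
        candidates.append(B + np.outer(r, r) / denom)
    pd = [M for M in candidates if np.all(np.linalg.eigvalsh(M) > 0)]
    B_new = min(pd, key=np.linalg.cond)
    return B_new, -np.linalg.solve(B_new, g)
```

Either way, the chosen matrix satisfies the secant equation B_new s = y, and the returned step solves B_new d = -g.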

5.
Symmetric rank-one (SR1) is one of the competitive formulas among the quasi-Newton (QN) methods. In this paper, we propose some modified SR1 updates based on modified secant equations, which use both gradient and function-value information. Furthermore, to avoid loss of positive definiteness and zero denominators in the new SR1 updates, we apply a restart procedure. Three new algorithms are given that improve the Hessian approximation of the SR1 method with modified secant equations. Numerical results show that the proposed algorithms are very encouraging, and their advantage over the standard SR1 and BFGS updates is clearly observed.

6.
The convergence behavior of quasi-Newton methods has been well investigated for many update rules. One exception that remains to be examined is the PSB update in Hilbert space. Like the SR1 update, the PSB update exploits the symmetry of the operator, but it does not require the operator to be positive definite. These properties are of great practical importance, for example for solving minimization problems where the starting operator is not positive definite, as other updates require to ensure local convergence. In this paper, a Kantorovich theorem is presented for a structured PSB update in Hilbert space, where the structure is exploited in the sense of Dennis and Walker. Finally, numerical implications are illustrated by various results on an optimal shape design problem.

7.
In this paper, the necessary optimality conditions for an unconstrained optimal control problem are used to derive a quasi-Newton method in which the update involves only second-order derivative terms. A pointwise update presented in a previous paper by the authors is modified to allow for more general second-order sufficiency conditions in the control problem. In particular, pointwise versions of the Broyden, PSB, and SR1 updates are considered. A convergence rate theorem is given for the Broyden and PSB versions. This research was supported by NSF Grant No. DMS-89-00410, NSF Grant No. INT-88-00560, AFOSR Grant No. AFOSR-89-0044, and the Deutsche Forschungsgemeinschaft.

8.
We propose a new choice for the parameter in the Broyden class and derive and discuss properties of the resulting self-complementary quasi-Newton update. Our derivation uses a variational principle that minimizes the extent to which the quasi-Newton relation is violated on a prior step. We discuss the merits of this variational principle vis-à-vis the other principle in common use, which minimizes the deviation from the current Hessian or inverse Hessian approximation in an appropriate Frobenius matrix norm. One notable advantage of our principle is an inherent symmetry: the same update is obtained regardless of whether the Hessian matrix or the inverse Hessian matrix is updated. We describe the relationship of our update to the BFGS, SR1 and DFP updates under particular assumptions on line search accuracy, the type of function being minimized (quadratic or nonquadratic), and the norm used in the variational principle. Some implementation considerations are discussed, and a numerical illustration based on an experimental MATLAB implementation is given.

9.
We present a quasi-Newton sequential quadratic programming (SQP) algorithm for nonlinear programs in which the Hessian of the Lagrangian function is block-diagonal. Problems with this characteristic frequently arise in the context of optimal control, for example when a direct multiple shooting parametrization is used. In this article, we describe an implementation of a filter line-search SQP method that computes search directions using an active-set quadratic programming (QP) solver. To take advantage of the block-diagonal structure of the Hessian matrix, each block is approximated separately by quasi-Newton updates. For nonconvex instances, which arise, for example, in optimal experimental design control problems, these blocks are often found to be indefinite. In that case, the block-BFGS quasi-Newton update can lead to poor convergence. The novel aspect of this work is the use of SR1 updates in place of BFGS approximations whenever possible. The resulting indefinite QPs necessitate an inertia-control mechanism within the sparse Schur-complement factorization carried out by the active-set QP solver. This permits an adaptive selection of the Hessian approximation that guarantees sufficient progress towards a stationary point of the problem. Numerical results demonstrate that the proposed approach reduces the number of SQP iterations and the CPU time required to solve a set of optimal control problems.

10.
In this paper we propose a subspace limited-memory quasi-Newton method for solving large-scale optimization problems with simple bounds on the variables. The limited-memory quasi-Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction, a subspace gradient direction, and a subspace modified gradient direction. The algorithm can be applied to large-scale problems since no subproblems need to be solved. The global convergence of the method is proved and some numerical results are also given.



11.
12.
Sebastian Schlenkrich, Andrea Walther 《PAMM》 2007, 7(1): 2020091-2020092
In this paper the concepts of partitioned quasi-Newton methods are applied to adjoint Broyden updates. Consequently, a corresponding partitioned adjoint Broyden update is presented and local convergence results are given. Numerical results compare the partitioned adjoint Broyden update methods with the corresponding unpartitioned quasi-Newton method and with Newton's method for nonlinear equations.

13.
1. Introduction. A line search method for minimizing a real function f generates a sequence x1, x2, ... of points by applying the iteration x_{k+1} = x_k + a_k p_k, k = 1, 2, ... (1). In a quasi-Newton method the search direction p_k is chosen so that B_k p_k = -g_k, where B_k is (usually) a positive definite matrix and g_k denotes ∇f(x_k). For the BFGS update (see [2], for example), the matrices B_k are defined by the formula B_{k+1} = B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(s_k^T y_k) (2), where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. It is well known that if B_1 is positive definite and s_k^T y_k > 0 (3), then all matrices B_{k+1}, k = 1, 2, ..., generated by (2) are positive definite. Thus p_k is a descent direction...
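The BFGS formula (2) above translates directly into code. The following minimal sketch (function name ours) applies the update and relies on the curvature condition (3), s^T y > 0, to keep the result positive definite.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Formula (2): BFGS update of B from step s = x_{k+1} - x_k and
    gradient difference y = g_{k+1} - g_k. If B is positive definite
    and s @ y > 0, the result is positive definite."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
```

The updated matrix satisfies the secant equation B_{k+1} s_k = y_k by construction.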

14.
In this paper a generalization of the quasi-Newton equation is considered. The symmetric solution which is minimum in a weighted Frobenius norm is presented along with the complete symmetric solution. These two solutions are then shown to contain a number of symmetric quasi-Newton update formulas, including updates which are orthogonal to a given subspace.

15.
This paper examines a type of symmetric quasi-Newton update for use in nonlinear optimization algorithms. The updates presented here impose additional properties on the Hessian approximations that do not result if the usual quasi-Newton updating schemes are applied to certain Gibbs free energy minimization problems. The updates derived in this paper are symmetric matrices that satisfy a given matrix equation and are least squares solutions to the secant equation. A general representation for this class of updates is given. The update in this class that has the minimum weighted Frobenius norm is also presented. This work was done at Sandia National Laboratories and supported by the US Dept. of Energy under contract no. DE-AC04-76DP00789.

16.
The paper compares a factorized sparse quasi-Newton update of Goldfarb with a nonfactorized sparse BFGS update of Shanno on a series of test problems, with the numerical results strongly favoring the unfactorized update. Goldfarb's method is analyzed to explain its poor numerical performance. Two specific conjugate gradient methods for solving the systems of linear equations required by the unfactorized update are described and tested. This research was supported by the National Science Foundation under Grant No. MCS-77-07327.

17.
This paper is concerned with two questions relating to quasi-Newton updates for unconstrained optimization that exploit any sparsity present in the second derivative matrix of the objective function. First, a family of such updates is derived, that reduces to any a priori known dense update formula when no sparsity is imposed. This family uses the Frobenius projection of the desired update on the subspace of matrices that satisfy all the needed conditions. In the second part, we prove that, under mild assumptions, a positive definite sparse quasi-Newton update always exists. The proof of this result includes the explicit determination of such an update.

18.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS and DFP updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if a simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.

19.
A GENERALIZED QUASI-NEWTON EQUATION AND COMPUTATIONAL EXPERIENCE
The quasi-Newton equation has played a central role in quasi-Newton methods for solving systems of nonlinear equations and/or unconstrained optimization problems. Pan instead suggested a new equation and showed that, in a certain approximation sense, it is of second order while the traditional equation is of first order [12]. In this paper, we generalize the two equations so as to include them as special cases. The generalized equation is analyzed, and new updates are derived from it. A DFP-like new update outperformed the traditional DFP update in computational experiments on a set of standard test problems.

20.
In this paper, the calibration of the nonlinear Lotka–Volterra model is used to compare the robustness and efficiency (CPU time) of different optimisation algorithms. Five versions of a quasi-Newton trust-region algorithm are developed and compared with a widely used quasi-Newton method. The trust-region algorithms are more robust, and three of them are numerically cheaper than the more usual line search approach. Computing the first derivatives of the objective function is cheaper with the backward differentiation (or adjoint model) technique than with the forward method as soon as the number of parameters exceeds a few. In the optimisation problem, the additional information about the Jacobian matrix made available by the forward method reduces the number of iterations but does not compensate for the increased numerical cost. A quasi-Newton trust-region algorithm with backward differentiation and a BFGS update after both successful and unsuccessful iterations represents a robust and efficient algorithm that can be used to calibrate very demanding dynamic models.
