Similar Documents
 20 similar documents found.
1.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS and DFP updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
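As a rough illustration of the kind of combined technique described above, the sketch below applies an Oren-Luenberger-type self-scaling factor to the Hessian approximation before a standard BFGS update. The function name, the safeguard, and the particular scaling factor are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y, eps=1e-12):
    """Illustrative self-scaling BFGS update of a Hessian approximation B.

    s = x_{k+1} - x_k, y = g_{k+1} - g_k.  The scaling factor tau is the
    Oren-Luenberger-type value (y's)/(s'Bs); if the curvature condition
    y's > 0 fails, the update is simply skipped to keep B positive definite.
    """
    sy = s @ y
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return B                                   # skip: curvature condition violated
    Bs = B @ s
    sBs = s @ Bs
    tau = sy / sBs                                 # self-scaling factor
    # standard BFGS update applied to the scaled matrix tau * B
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / sy
```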

2.
In this paper, we propose new members of the Broyden family of quasi-Newton methods. We develop, on the basis of well-known least-change results for the BFGS and DFP updates, a measure for the Broyden family which seeks to take into account the change in both the Hessian approximation and its inverse. The proposal is then to choose the formula which gives the least value of this measure in terms of the two parameters available, and hence to produce an update which is optimal in the sense of the given measure. Several approaches to the problem of minimizing the measure are considered, from which new updates are obtained. In particular, one approach yields a new variational result for the Davidon optimally conditioned method and another yields a reasonable modification to this method. The paper is also concerned with the possibility of estimating, in a certain sense, the size of the eigenvalues of the Hessian approximation on the basis of two available scalars. This allows one to derive further modifications to the above-mentioned methods. Comparisons with the BFGS and Davidon methods on a set of standard test problems show promising results for certain new methods. Part of this work was done during the author's visits at the International Centre for Theoretical Physics, Trieste, Italy, at the Systems Department, University of Calabria, Cosenza, Italy, and at Ajman University College of Science and Technology, Ajman, United Arab Emirates. The author expresses his gratitude to Professor L. Grandinetti for his encouragement and thanks the anonymous referees for their careful reading of an earlier draft of the paper and valuable comments, which led to a substantial improvement of the original paper.
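For reference, the standard one-parameter Broyden class that these updates belong to can be written, with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, as follows; this is the textbook form, and the paper's two-parameter measure and resulting optimal updates are not reproduced here.

$$
B_{k+1}(\phi) \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} \;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k} \;+\; \phi\,\bigl(s_k^{\top} B_k s_k\bigr)\, v_k v_k^{\top},
\qquad
v_k \;=\; \frac{y_k}{y_k^{\top} s_k} \;-\; \frac{B_k s_k}{s_k^{\top} B_k s_k},
$$

where $\phi = 0$ gives the BFGS update and $\phi = 1$ gives the DFP update.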

3.
In previous work, the authors provided a foundation for the theory of variable metric proximal point algorithms in Hilbert space. In that work conditions are developed for global, linear, and superlinear convergence. This paper focuses attention on two matrix secant updating strategies for the finite dimensional case. These are the Broyden and BFGS updates. The BFGS update is considered for application in the symmetric case, e.g., convex programming applications, while the Broyden update can be applied to general monotone operators. Subject to the linear convergence of the iterates and a quadratic growth condition on the inverse of the operator at the solution, superlinear convergence of the iterates is established for both updates. These results are applied to show that the Chen–Fukushima variable metric proximal point algorithm is superlinearly convergent when implemented with the BFGS update. Received: September 12, 1996 / Accepted: January 7, 2000 / Published online March 15, 2000

4.
In this study, we develop and test a strategy for selectively sizing (multiplying by an appropriate scalar) the approximate Hessian matrix before it is updated in the BFGS and DFP trust-region methods for unconstrained optimization. Our numerical results imply that, for use with the DFP update, the Oren-Luenberger sizing factor is completely satisfactory and selective sizing is vastly superior to the alternatives of never sizing or first-iteration sizing and is slightly better than the alternative of always sizing. Numerical experimentation showed that the Oren-Luenberger sizing factor is not a satisfactory sizing factor for use with the BFGS update. Therefore, based on our newly acquired understanding of the situation, we propose a centered Oren-Luenberger sizing factor to be used with the BFGS update. Our numerical experimentation implies that selectively sizing the BFGS update with the centered Oren-Luenberger sizing factor is superior to the alternatives. These results contradict the folk axiom that sizing should be done only at the first iteration. They also show that, without sufficient sizing, DFP is vastly inferior to BFGS; however, when selectively sized, DFP is competitive with BFGS. This research was supported in part by NSF Cooperative Agreement No. CCR-88-09615, AFOSR Grant 89-0363, DOE Grant DEFG05-86-ER25017, and ARO Grant 9DAAL03-90-G-0093. This paper was presented at the ICIAM 91 International Conference, Washington DC, July 1991. The authors thank all those individuals who took part in the lively discussions concerning this material following the ICIAM 91 presentation. These discussions influenced the current version of the paper. The first author also thanks Maria Cristina Maciel for assistance and support during the earlier stages of the research. This author is currently a Graduate Student in the Mathematics Department, University of California at Riverside, Riverside, California, and has participated in the Summer Student Visitor Program at the Center for Research in Parallel Computation of Rice University for the past two years.
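A minimal sketch of the sizing idea, assuming the plain Oren-Luenberger factor and a simple "size only when it shrinks the matrix" selection rule (the paper's centered factor and its actual selection test are not reproduced here):

```python
import numpy as np

def selectively_sized_update(B, s, y, update, always_size=False):
    """Multiply B by the Oren-Luenberger factor sigma = (y's)/(s'Bs) before
    applying a quasi-Newton update.  'Selective' sizing is modelled here by
    sizing only when sigma < 1; `update` is any routine of the form
    update(B, s, y), e.g. a BFGS or DFP trust-region update.
    """
    sBs = s @ (B @ s)
    sigma = (s @ y) / sBs              # Oren-Luenberger sizing factor
    if always_size or sigma < 1.0:     # illustrative selection rule
        B = sigma * B
    return update(B, s, y)
```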

5.
We study the use of the BFGS and DFP algorithms with step-lengths of one for minimizing quadratic functions of only two variables. The updating formulae in this case imply nonlinear three term recurrence relations between the eigenvalues of consecutive second derivative approximations, which are analysed in order to explain some gross inefficiencies that can occur. Specifically, the BFGS algorithm may require more than 10 iterations to achieve the first decimal place of accuracy, while the performance of the DFP method is far worse. The results help to explain why the DFP method is often less suitable than the BFGS algorithm for general unconstrained optimization calculations, and they show that quadratic functions provide much information about efficiency when the current vector of variables is too far from the solution for an asymptotic convergence analysis.
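A small experiment in the spirit of this setting can be reproduced with a few lines of code: BFGS with step length one on a two-variable quadratic, counting iterations until the iterate is within the first decimal place of the minimizer. The particular quadratic, starting point, and initial approximation below are illustrative choices, not those used in the paper.

```python
import numpy as np

def bfgs_unit_steps(A, x0, tol=1e-1, max_iter=1000):
    """BFGS with step length one on f(x) = 0.5 x'Ax (minimizer at 0).
    Returns the number of iterations until ||x|| < tol."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                            # inverse-Hessian approximation
    for k in range(max_iter):
        if np.linalg.norm(x) < tol:               # first decimal place reached
            return k
        g = A @ x
        s = -H @ g                                # unit step: x_{k+1} = x_k + s
        x_new = x + s
        y = A @ x_new - g
        rho = 1.0 / (y @ s)
        V = np.eye(len(x)) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)    # BFGS update of the inverse
        x = x_new
    return max_iter

# e.g. a moderately ill-conditioned two-variable quadratic
print(bfgs_unit_steps(np.diag([1.0, 100.0]), x0=[1.0, 1.0]))
```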

6.
We propose a new choice for the parameter in the Broyden class and derive and discuss properties of the resulting self-complementary quasi-Newton update. Our derivation uses a variational principle that minimizes the extent to which the quasi-Newton relation is violated on a prior step. We discuss the merits of the variational principle used here vis-a-vis the other principle in common use, which minimizes deviation from the current Hessian or Hessian inverse approximation in an appropriate Frobenius matrix norm. One notable advantage of our principle is an inherent symmetry that results in the same update being obtained regardless of whether the Hessian matrix or the inverse Hessian matrix is updated. We describe the relationship of our update to the BFGS, SR1 and DFP updates under particular assumptions on line search accuracy, type of function being minimized (quadratic or nonquadratic) and norm used in the variational principle. Some considerations concerning implementation are discussed and we also give a numerical illustration based on an experimental implementation using MATLAB.

7.
Quasi-Newton methods are powerful techniques for solving unconstrained minimization problems. Variable metric methods, which include the BFGS and DFP methods, generate dense positive definite approximations and, therefore, are not applicable to large-scale problems. To overcome this difficulty, a sparse quasi-Newton update with positive definite matrix completion that exploits the sparsity pattern of the Hessian is proposed. The proposed method first calculates a partial approximate Hessian, consisting only of the entries in the sparsity pattern E (or its extension F), using an existing quasi-Newton update formula such as the BFGS or DFP formula. Next, a full matrix H_{k+1}, which is a maximum-determinant positive definite matrix completion of this partial matrix, is obtained. If the sparsity pattern E (or its extension F) has a property related to a chordal graph, then the matrix H_{k+1} can be expressed as a product of sparse matrices. The time and space requirements of the proposed method are lower than those of the BFGS or DFP methods. In particular, when the Hessian matrix is tridiagonal, the complexities become O(n). The proposed method is shown to have superlinear convergence under the usual assumptions.

8.
Journal of Complexity, 2002, 18(2): 557–572
This paper studies recent modifications of the limited memory BFGS (L-BFGS) method for solving large scale unconstrained optimization problems. Each modification technique attempts to improve the quality of the L-BFGS Hessian by employing (extra) updates in a certain sense. Because at some iterations these updates might be redundant or might worsen the quality of this Hessian, this paper proposes an update criterion to measure this quality. Hence, extra updates are employed only to improve a poor approximation of the L-BFGS Hessian. The presented numerical results illustrate the usefulness of this criterion and show that extra updates improve the performance of the L-BFGS method substantially.
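For context, the core of the standard L-BFGS machinery that these modifications build on is the two-loop recursion below, which computes the search direction from the most recent curvature pairs; the paper's extra-update acceptance criterion is not reproduced here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion.

    g       : current gradient
    s_list  : stored steps s_i = x_{i+1} - x_i, oldest first
    y_list  : stored gradient differences y_i = g_{i+1} - g_i, oldest first
    Returns the search direction -H_k g built from the stored pairs.
    """
    q = np.array(g, dtype=float)
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):   # newest to oldest
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    # scaled initial inverse Hessian H0 = gamma * I
    gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1]) if s_list else 1.0
    r = gamma * q
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):  # oldest to newest
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r
```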

9.
1. Introduction. Assume that we are finding the minimizer of the following unconstrained optimization problem: min f(x), x ∈ R^n, (1.1) and assume the current point is x_k. To calculate x_{k+1} from x_k by a line search method, the following iteration is applied: x_{k+1} = x_k + λ_k p_k, k = 1, 2, .... (1.2) In the BFGS al...

10.
This paper is concerned with collinear scaling algorithms for unconstrained minimization where the underlying local approximants are forced to interpolate the objective function value and gradient at only the two most recent iterates. By suitably modifying the procedure of Sorensen (1980) for deriving such algorithms, we show that two members of the derived algorithm class, related to the DFP and BFGS methods respectively, are locally and q-superlinearly convergent. This local analysis, as well as the results it yields, exhibits the same sort of duality exhibited by the analyses of Broyden, Dennis and Moré (1973) and Dennis and Moré (1974) for the DFP and BFGS methods. The results in this paper also imply the local and q-superlinear convergence of the collinear scaling algorithms of Sorensen (1982, pp. 154–156) related to the DFP and BFGS methods. Research supported in part by funds provided by the Washington State University Research and Arts Committee, by NSF Grant DMS-8414460 and by DOE Grant DE-FG06-85ER25007.

11.
A Family of Superlinearly Convergent Projected Quasi-Newton Algorithms
This paper combines gradient projection with quasi-Newton methods to give a family of algorithms, containing two groups of parameters, for solving general linearly constrained nonlinear programming problems. Under certain conditions, the global convergence of the family and the superlinear convergence rate of a subfamily are proved, and some special cases, such as the projected DFP method and the projected BFGS method, are given.
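As a rough sketch of the projected quasi-Newton idea for the equality-constrained case Ax = b (the paper's two-parameter family, its treatment of general linear constraints, and its convergence machinery are not reproduced here; the null-space construction below is a generic textbook device):

```python
import numpy as np
from scipy.linalg import null_space

def projected_qn_direction(B, g, A):
    """Reduced (null-space) quasi-Newton direction for linear constraints Ax = b:
    d = -Z (Z'BZ)^{-1} Z'g, where the columns of Z span the null space of A,
    so that x + d stays feasible for any feasible x."""
    Z = null_space(A)                                  # feasible directions
    d_reduced = np.linalg.solve(Z.T @ B @ Z, -(Z.T @ g))
    return Z @ d_reduced
```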

12.
We propose an image restoration method. The method generalizes image restoration algorithms that are based on the Moore–Penrose solution of certain matrix equations that define the linear motion blur. Our approach is based on the use of least squares solutions of these matrix equations, wherein an arbitrary matrix of appropriate dimensions is included besides the Moore–Penrose inverse. In addition, the method is a useful tool for improving results obtained by other image restoration methods. Toward that direction, we investigate the case where the arbitrary matrix is replaced by the matrix obtained from the Haar-basis reconstructed image. The method has been tested by reconstructing an image after the removal of blur caused by uniform linear motion and filtering the noise that corrupts the image pixels. The quality of the restoration is observable by the human eye. Benefits of using the method are illustrated by the values of the improvement in signal-to-noise ratio and of the peak signal-to-noise ratio.

13.
In the mid-1960’s, Davidon’s method was brought to the author’s attention by M.J.D. Powell, one of its earliest proponents. Its great efficacy in solving a rather difficult computational problem in which the author was involved led to an attempt to find a “best” updating formula. “Best” seemed to suggest “least” in the sense of some norm, to further the stability of the method. This led to the idea of minimizing a generalized quadratic (Frobenius) norm with the quasi-Newton and symmetry constraints on the updates. Several interesting formulas were derived, including the Davidon-Fletcher-Powell formula (as shown by Goldfarb). This approach was extended to the derivation of updates requiring no derivatives, and to Broyden-like updates for the solution of simultaneous nonlinear equations. Attempts were made to derive minimum-norm corrections in product-form updates, with an eye to preserving positive-definiteness. In the course of this attempt, it was discovered that the DFP formula could be written as a product, leading to some interesting theoretical developments. Finally, a linearized product-form update was developed which was competitive with the best update (BFGS) of that time. Received: May 3, 1999 / Accepted: January 11, 2000 / Published online March 15, 2000

14.
A method for solving the Riemann-Hilbert boundary value problem with piecewise-constant coefficients is generalized /1/. It is shown that the following static problems of a composite elastic plane with three kinds of connection conditions admit exact solutions: 1) the splicing line is weakened by a system of loaded slots and a transverse shear crack, or the edges of one of the slots are partially in contact, or one of the slots is cleaved by a rigid insert; 2) the splicing line is reinforced by a system of thin rigid inclusions and there is one arbitrarily located delamination zone; 3) the elastic half-planes are in contact (with slip) on a certain section of their boundaries, and mixed boundary conditions in the displacements and stresses are given on the rest of the boundaries.

In the general case the Riemann-Hilbert boundary value problem for many functions reduces to the problem of linear conjugation, and then to Fredholm integral equations /2/. Closed solutions are obtained in certain special cases /3–5/. For applications we mention the papers /6, 7/, where problems are considered concerning slits at the interface of two elastic media with two kinds of physical boundary conditions taken into account simultaneously.


15.
The equal-inner-product decomposition algorithm of a positive definite matrix with respect to a vector is applied to quasi-Newton algorithms for solving unconstrained optimization problems. A new algorithm is proposed that determines the search direction from the equal-inner-product decomposition matrix of the updated matrix, together with new iterative formulas that are equivalent to the DFP and BFGS update formulas.

16.
Local convergence analysis for partitioned quasi-Newton updates
Summary. This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain non-linear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. Work supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).
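A rough sketch of the partitioned-update idea for a partially separable objective f(x) = Σ_i f_i(x): each element Hessian is approximated separately on its own small index set and the full approximation is assembled by summation. The element-level BFGS update with a skip rule, the data layout, and the function names below are illustrative assumptions.

```python
import numpy as np

def partitioned_update(blocks, s, y_elements):
    """Update each element Hessian approximation B_i on its index set I_i and
    assemble the full (sparse) approximation.

    blocks     : list of (indices, B_i) pairs, B_i of size |I_i| x |I_i|
    s          : full step x_{k+1} - x_k
    y_elements : element gradient differences, one vector of length |I_i| per element
    """
    n = len(s)
    B_full = np.zeros((n, n))
    new_blocks = []
    for (idx, B_i), y_i in zip(blocks, y_elements):
        s_i = s[idx]
        if s_i @ y_i > 1e-12:                         # element-level BFGS update
            Bs = B_i @ s_i
            B_i = B_i - np.outer(Bs, Bs) / (s_i @ Bs) + np.outer(y_i, y_i) / (s_i @ y_i)
        # otherwise the element approximation is left unchanged (a simple skip rule)
        new_blocks.append((idx, B_i))
        B_full[np.ix_(idx, idx)] += B_i               # assemble into the full matrix
    return new_blocks, B_full
```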

17.
A Modified BFGS Algorithm for Unconstrained Optimization
In this paper we present a modified BFGS algorithm for unconstrained optimization. The BFGS algorithm updates an approximate Hessian which satisfies the most recent quasi-Newton equation. The quasi-Newton condition can be interpreted as the interpolation condition that the gradient value of the local quadratic model matches that of the objective function at the previous iterate. Our modified algorithm requires that the function value is matched, instead of the gradient value, at the previous iterate. The modified algorithm preserves the global and local superlinear convergence properties of the BFGS algorithm. Numerical results are presented, which suggest that a slight improvement has been achieved.
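One natural way to realize the interpolation condition described above (a plausible reconstruction, not necessarily the paper's exact formula) is to require the local quadratic model at x_{k+1} to reproduce the previous function value f_k = f(x_k):

$$
f_k = f_{k+1} - g_{k+1}^{\top} s_k + \tfrac{1}{2}\, s_k^{\top} B_{k+1} s_k
\quad\Longrightarrow\quad
s_k^{\top} B_{k+1} s_k = 2\bigl(f_k - f_{k+1} + g_{k+1}^{\top} s_k\bigr),
$$

which can be enforced by imposing the scaled secant condition $B_{k+1} s_k = t_k y_k$ with

$$
t_k = \frac{2\bigl(f_k - f_{k+1} + g_{k+1}^{\top} s_k\bigr)}{s_k^{\top} y_k},
$$

and then applying the ordinary BFGS formula with $y_k$ replaced by $t_k y_k$.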

18.
We present a quasi-Newton sequential quadratic programming (SQP) algorithm for nonlinear programs in which the Hessian of the Lagrangian function is block-diagonal. Problems with this characteristic frequently arise in the context of optimal control; for example, when a direct multiple shooting parametrization is used. In this article, we describe an implementation of a filter line-search SQP method that computes search directions using an active-set quadratic programming (QP) solver. To take advantage of the block-diagonal structure of the Hessian matrix, each block is approximated separately by quasi-Newton updates. For nonconvex instances that arise, for example, in optimum experimental design control problems, these blocks are often found to be indefinite. In that case, the block-BFGS quasi-Newton update can lead to poor convergence. The novel aspect in this work is the use of SR1 updates in place of BFGS approximations whenever possible. The resulting indefinite QPs necessitate an inertia control mechanism within the sparse Schur-complement factorization that is carried out by the active-set QP solver. This permits an adaptive selection of the Hessian approximation that guarantees sufficient progress towards a stationary point of the problem. Numerical results demonstrate that the proposed approach reduces the number of SQP iterations and CPU time required for the solution of a set of optimal control problems.
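A rough sketch of the per-block update selection described above: each diagonal block is updated by SR1 whenever the usual SR1 safeguard allows it, and by BFGS (or not at all) otherwise. The threshold, fallback rule, and function names are illustrative assumptions rather than the paper's exact logic, which also involves inertia control inside the QP solver.

```python
import numpy as np

def block_update(B, s, y, r=1e-8):
    """Update one diagonal Hessian block: prefer SR1, fall back to BFGS."""
    Bs = B @ s
    w = y - Bs
    # SR1 is used when its denominator is safely away from zero
    if abs(w @ s) >= r * np.linalg.norm(w) * np.linalg.norm(s):
        return B + np.outer(w, w) / (w @ s)            # SR1 (may be indefinite)
    if s @ y > 0 and s @ Bs > 0:                       # BFGS needs positive curvature
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
    return B                                           # skip the update

def update_block_diagonal(blocks, s_blocks, y_blocks):
    """Apply block_update independently to every block of a block-diagonal Hessian."""
    return [block_update(B, s, y) for B, s, y in zip(blocks, s_blocks, y_blocks)]
```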

19.
We present a superlinearly convergent exact penalty method for solving constrained nonlinear least squares problems, in which the projected exact penalty Hessian is approximated by using a structured secant updating scheme. We give general conditions for the two-step superlinear convergence of the algorithm and prove that the projected structured Broyden–Fletcher–Goldfarb–Shanno (BFGS), Powell-symmetric-Broyden (PSB), and Davidon–Fletcher–Powell (DFP) update formulas satisfy these conditions. Then we extend the results to the projected structured convex Broyden family update formulas. Extensive testing results obtained by an implementation of our algorithms, as compared to the results obtained by several other competent algorithms, demonstrate the efficiency and robustness of the proposed approach.

20.
In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both modified updates are evaluated and used in the switching rule. When the condition number of the modified SR1 update is superior to that of the modified BFGS update, the step in the proposed quasi-Newton method is the modified SR1 step; otherwise the step is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small-, medium- and large-scale optimization problems. The numerical results are reported and analyzed to show the superiority of the proposed method.
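A condition-number-based switching rule in the spirit of this abstract is sketched below; the plain BFGS and SR1 formulas stand in for the authors' modified updates, and np.linalg.cond is used as the comparison criterion, so the details are illustrative assumptions.

```python
import numpy as np

def switching_update(B, s, y, r=1e-8):
    """Form a BFGS candidate and an SR1 candidate and keep the one with
    the smaller condition number; keep B unchanged if neither is safe."""
    candidates = []
    Bs = B @ s
    if s @ y > 0 and s @ Bs > 0:                       # BFGS candidate
        candidates.append(B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y))
    w = y - Bs                                         # SR1 candidate
    if abs(w @ s) >= r * np.linalg.norm(w) * np.linalg.norm(s):
        candidates.append(B + np.outer(w, w) / (w @ s))
    if not candidates:
        return B
    return min(candidates, key=np.linalg.cond)         # better-conditioned update
```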
