Similar Documents
20 similar documents retrieved (search time: 265 ms).
1.
Stable techniques are considered for updating the reduced Hessian matrix that arises in a null-space active set method for quadratic programming when the Hessian matrix itself may be indefinite. A scheme for defining and updating the null-space basis matrix is described which is adequately stable and allows advantage to be taken of sparsity in the constraint matrix. A new canonical form for the reduced Hessian matrix is proposed that can be updated in a numerically stable way. Some consequences for the choice of minor iteration search direction are described. Received: February 24, 1999 / Accepted: February 3, 2000 / Published online: March 15, 2000
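For context (standard null-space notation, not taken from the paper itself): if Z is a basis matrix for the null space of the active constraint matrix A, so that AZ = 0, the curvature of the quadratic objective q(x) = (1/2) x^T H x + c^T x on the feasible set is captured by the reduced Hessian

    Z^T H Z,

which is positive definite at a nondegenerate minimizer even when H itself is indefinite; it is this matrix, together with the basis Z, that the techniques above maintain and update stably.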

2.
This paper studies subspace properties of trust region methods for unconstrained optimization, assuming the approximate Hessian is updated by quasi-Newton formulae and the initial Hessian approximation is appropriately chosen. It is shown that the trial step obtained by solving the trust region subproblem is in the subspace spanned by all the gradient vectors computed. Thus, the trial step can be defined by minimizing the quasi-Newton quadratic model in the subspace. Based on this observation, some subspace trust region algorithms are proposed and numerical results are also reported.
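For reference, the trust region subproblem has the standard form

    min_d  g_k^T d + (1/2) d^T B_k d   subject to  ||d|| <= Delta_k,

where g_k is the current gradient, B_k the quasi-Newton approximation to the Hessian, and Delta_k the trust region radius. The subspace property stated above means that, when the initial approximation B_0 is chosen appropriately (e.g., a multiple of the identity), the solution d lies in span{g_0, g_1, ..., g_k}, so the subproblem may equivalently be solved in that typically much smaller subspace.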

3.
Optimization, 2012, 61(12): 2229–2246

The secant (quasi-Newton) equation plays one of the most important roles in finding an optimal solution in nonlinear optimization: curvature information must satisfy the usual secant equation to ensure positive definiteness of the Hessian approximation. In this work, we present a new diagonal update that improves the Hessian approximation through a modified weak secant equation for the diagonal quasi-Newton (DQN) method. Gradient and function evaluations are used to obtain the new weak secant equation and to achieve higher-order accuracy in the curvature information. The modified DQN methods based on the modified weak secant equation are globally convergent. Extended numerical results indicate the advantages of the modified DQN methods over the usual ones and over some classical conjugate gradient methods.
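For reference, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, the usual secant equation and the weak secant (quasi-Cauchy) condition read

    B_{k+1} s_k = y_k                    (secant equation),
    s_k^T B_{k+1} s_k = s_k^T y_k        (weak secant equation);

a diagonal matrix generally has too few degrees of freedom to satisfy the full secant equation, which is why diagonal quasi-Newton methods impose only the scalar weak condition. The paper's modified weak secant equation, which brings function values as well as gradients into the right-hand side, is not reproduced here.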

4.
In this paper, we present a new algorithm to accelerate the Chambolle gradient projection method for total variation image restoration. The proposed method considers an approximation of the Hessian based on the secant equation; combined with the quasi-Cauchy equations and diagonal updating, a positive definite diagonal matrix is obtained. In the proposed method, this positive definite diagonal matrix is used in place of the constant time stepsize in Chambolle's method. The global convergence of the proposed scheme is proved, and some numerical results illustrate the efficiency of this method. Moreover, we also extend the quasi-Newton diagonal updating method to solve nonlinear systems of monotone equations; performance comparisons show that the proposed method is efficient. A practical application of the monotone equations is shown and tested on sparse signal reconstruction in compressed sensing.

5.
This paper surveys some of the existing approaches to quasi-Newton methods and introduces a new way for constructing inverse Hessian approximations for such algorithms. This new approach is based on restricting Newton's method to subspaces over which the inverse Hessian is assumed to be known, while expanding this subspace using gradient information. It is shown that this approach can lead to some well-known formulas for updating the inverse Hessian approximation. Deriving such updates through this approach provides new understanding of these formulas and their relation to the pseudo-Newton-Raphson algorithm.
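As a concrete instance of the well-known formulas alluded to, the BFGS update of the inverse Hessian approximation H_k is

    H_{k+1} = (I - rho_k s_k y_k^T) H_k (I - rho_k y_k s_k^T) + rho_k s_k s_k^T,   rho_k = 1 / (y_k^T s_k),

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. In the subspace view described above, such formulas arise from restricting Newton's method to a subspace on which the inverse Hessian is assumed known and enlarging that subspace with new gradient information.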

6.
Optimization, 2012, 61(3): 375–389

In this paper we consider two alternative choices for the factor used to scale the initial Hessian approximation, before updating by a member of the Broyden family of updates for quasi-Newton optimization methods. By extensive computational experiments carried out on a set of standard test problems from the CUTE collection, using efficient implementations of the quasi-Newton method, we show that the proposed new scaling factors are better, in terms of efficiency achieved (number of iterations, number of function and gradient evaluations), than the standard choice proposed in the literature.
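For reference, the scaling factor most often described as the standard choice in the literature (our assumption about the baseline, not a statement from the paper) is the Oren-Luenberger/Shanno-Phua value

    gamma_k = (y_k^T s_k) / (y_k^T y_k),   H_0 <- gamma_k I,

which sizes the initial approximation to match the most recently observed curvature along the step. The paper's two alternative factors are not reproduced here.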

7.
Filter approaches, initially presented by Fletcher and Leyffer in 2002, are attractive methods for nonlinear programming. In this paper, we propose an interior-point barrier projected Hessian updating algorithm with a line search filter method for nonlinear optimization. The Lagrangian function value, instead of the objective function value, is used in the filter. Damped BFGS updating is employed to maintain the positive definiteness of the matrices in the projected Hessian updating algorithm. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
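For reference, damped BFGS updating in its standard Powell form (the thresholds 0.2 and 0.8 below are the conventional constants; we assume, rather than know, that the paper uses these) replaces y_k in the BFGS formula by

    r_k = theta_k y_k + (1 - theta_k) B_k s_k,

where theta_k = 1 if s_k^T y_k >= 0.2 s_k^T B_k s_k, and

    theta_k = (0.8 s_k^T B_k s_k) / (s_k^T B_k s_k - s_k^T y_k)

otherwise. This guarantees s_k^T r_k >= 0.2 s_k^T B_k s_k > 0, which is precisely the condition under which the updated matrix stays positive definite.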

8.
In order to apply quasi-Newton methods to solve unconstrained minimization calculations when the number of variables is very large, it is usually necessary to make use of any sparsity in the second derivative matrix of the objective function. Therefore, it is important to extend to the sparse case the updating formulae that occur in variable metric algorithms to revise the estimate of the second derivative matrix. Suitable extensions suggest themselves when the updating formulae are derived by variational methods [1, 3]. The purpose of the present paper is to give a new proof of a theorem of Dennis and Schnabel [1], that shows the effect of sparsity on updating formulae for second derivative estimates.
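The variational derivation referred to above poses a least-change problem; in a standard formulation (the sparsity-pattern notation S is ours),

    minimize  || B_+ - B ||_F
    subject to  B_+ s = y,   B_+ = B_+^T,   (B_+)_{ij} = 0 for (i,j) not in S,

i.e., among all symmetric matrices with the prescribed sparsity pattern that satisfy the secant equation, the update closest to the current estimate B in the Frobenius norm is selected. The theorem of Dennis and Schnabel discussed in the paper describes how these sparsity constraints alter the resulting formulae.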

9.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward finite differences of the components of the gradient along the coordinate directions. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than: the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update; the weak quasi-Newton diagonal algorithm; the quasi-Cauchy diagonal algorithm; the diagonal approximation of the Hessian by the least-change secant updating strategy minimizing the trace of the matrix; the Cauchy algorithm with Oren-Luenberger scaling in its complementary form (i.e., the Barzilai-Borwein algorithm); the steepest descent algorithm; and the classical BFGS algorithm. However, the algorithm is inferior to the limited-memory BFGS algorithm (L-BFGS).
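A minimal sketch of the finite-difference idea described above, assuming a plain forward difference (the paper's particular scaling of the differences, and all names and the step size h below, are ours):

    import numpy as np

    def diagonal_hessian_fd(grad, x, h=1e-6):
        # Estimate the i-th diagonal element of the Hessian as
        # (g_i(x + h*e_i) - g_i(x)) / h: a forward difference of the
        # i-th gradient component along the i-th coordinate direction.
        g0 = grad(x)
        d = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            d[i] = (grad(x + e)[i] - g0[i]) / h
        return d

    # Check on f(x) = sum(c_i * x_i^2), whose Hessian diagonal is 2*c:
    c = np.array([1.0, 3.0, 5.0])
    grad_f = lambda x: 2.0 * c * x
    print(diagonal_hessian_fd(grad_f, np.array([1.0, -2.0, 0.5])))  # ~[2. 6. 10.]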

10.
We consider Hessian approximation schemes for large-scale unconstrained optimization in the context of discretized problems. The Hessians considered typically present a nontrivial sparsity and partial separability structure, which allows iterative quasi-Newton methods to handle them despite their size. Structured finite-difference methods and updating schemes based on the secant equation are presented and compared numerically inside the multilevel trust-region algorithm proposed by Gratton et al. (IMA J. Numer. Anal. 28(4): 827–861, 2008).

11.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix, and determining it by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used; however, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed. The above algorithms were tested on a set of problems; the overall conclusion is that these methods perform well in practice.
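To make the role of the modified Cholesky factorization concrete, here is its simplest variant, which adds a multiple of the identity until the factorization succeeds; this is a generic sketch (names and constants ours), not the partial factorization schemes developed in the paper:

    import numpy as np

    def cholesky_add_identity(H, beta=1e-3, max_tries=60):
        # Increase tau until H + tau*I is positive definite enough to
        # admit a Cholesky factorization; return the factor and tau.
        min_diag = np.min(np.diag(H))
        tau = 0.0 if min_diag > 0.0 else beta - min_diag
        for _ in range(max_tries):
            try:
                L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
                return L, tau
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, beta)
        raise RuntimeError("modified factorization failed")

    H = np.array([[1.0, 2.0], [2.0, 1.0]])  # indefinite: eigenvalues 3 and -1
    L, tau = cholesky_add_identity(H)
    print(tau, np.allclose(L @ L.T, H + tau * np.eye(2)))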

12.
The paper examines methods for increasing the dimension of a quasi-Newton approximation to a Hessian matrix when the dimension of the problem is increased. A new method is proposed, and numerical results are given to demonstrate the superiority of the new method over existing methods. The work of the first author was partially supported by the Air Force Office of Scientific Research under Grant No. AFOSR-86-0170.

13.
We propose a new choice for the parameter in the Broyden class and derive and discuss properties of the resulting self-complementary quasi-Newton update. Our derivation uses a variational principle that minimizes the extent to which the quasi-Newton relation is violated on a prior step. We discuss the merits of the variational principle used here vis-a-vis the other principle in common use, which minimizes deviation from the current Hessian or Hessian inverse approximation in an appropriate Frobenius matrix norm. One notable advantage of our principle is an inherent symmetry that results in the same update being obtained regardless of whether the Hessian matrix or the inverse Hessian matrix is updated. We describe the relationship of our update to the BFGS, SR1 and DFP updates under particular assumptions on line search accuracy, type of function being minimized (quadratic or nonquadratic) and norm used in the variational principle. Some considerations concerning implementation are discussed, and we also give a numerical illustration based on an experimental implementation using MATLAB.
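For reference, the Broyden class is the one-parameter family of updates

    B_{k+1}(phi) = B_k - (B_k s_k s_k^T B_k) / (s_k^T B_k s_k) + (y_k y_k^T) / (y_k^T s_k) + phi (s_k^T B_k s_k) v_k v_k^T,

    v_k = y_k / (y_k^T s_k) - (B_k s_k) / (s_k^T B_k s_k),

with phi = 0 giving the BFGS update and phi = 1 the DFP update (SR1 corresponds to another, iteration-dependent, value of phi). The paper's new choice of phi, obtained from the prior-step variational principle, is not reproduced here.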

14.
We consider multi-step quasi-Newton methods for unconstrained optimization. These methods were introduced by Ford and Moghrabi (Appl. Math., vol. 50, pp. 305–323, 1994; Optimization Methods and Software, vol. 2, pp. 357–370, 1993), who showed how interpolating curves could be used to derive a generalization of the secant equation (the relation normally employed in the construction of quasi-Newton methods). One of the most successful of these multi-step methods uses the current approximation to the Hessian to determine the parameterization of the interpolating curve in the variable space and, hence, the generalized updating formula. In this paper, we investigate new parameterization techniques based on the approximate Hessian, in an attempt to determine a better Hessian approximation at each iteration and, thus, improve the numerical performance of such algorithms.
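Purely as an illustration of what a generalized secant relation can look like (the coefficients below are schematic; the actual relation in Ford and Moghrabi's derivation depends on how the interpolating curve is parameterized): instead of imposing B_{k+1} s_k = y_k, a two-step method imposes a condition built from the two most recent steps, e.g.

    B_{k+1} (s_k - delta s_{k-1}) = y_k - delta y_{k-1},

where the scalar delta is fixed by the parameterization of the curve through the last three iterates. The metric used to choose that parameterization, here involving the current Hessian approximation, is exactly what this paper varies.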

15.
It is shown how to generate conjugate directions without exact line searches by applying variable metric updating formulas in factorized form to give the columns of the Cholesky factors of the Hessian matrix one at a time. Minimization methods based upon these results are given, including one which can take advantage of sparsity in the matrix of second derivatives of the objective function. The results of some numerical tests on a small sample of test problems are presented.

16.
This paper extends the self-scaling scheme, appropriate for quasi-Newton methods, to two-step quasi-Newton methods. Scaling is performed during the update of the current Hessian approximation, prior to the computation of the next quasi-Newton direction, whenever necessary. Global convergence of the new method is established on uniformly convex functions with the standard Wolfe line search. Preliminary numerical testing shows that this technique improves the performance of the two-step method substantially.

17.
The adaptive cubic regularization method (Cartis et al., Math. Program. Ser. A 127(2): 245–295, 2011; Math. Program. Ser. A 130(2): 295–319, 2011) has been recently proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic approximation which comprises an adaptive regularization parameter whose role is related to the local Lipschitz constant of the objective's Hessian. We present new updating strategies for this parameter based on interpolation techniques, which improve the overall numerical performance of the algorithm. Numerical experiments on large nonlinear least-squares problems are provided.
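For reference, the cubic model minimized (possibly only approximately) at each iteration of the adaptive cubic regularization method is

    m_k(s) = f(x_k) + g_k^T s + (1/2) s^T B_k s + (sigma_k / 3) ||s||^3,

where sigma_k > 0 is the adaptive regularization parameter; it is the rule for updating sigma_k from one iteration to the next that the interpolation-based strategies of this paper improve.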

18.
A class of generalized variable penalty formulations for solving nonlinear programming problems is presented. The method poses a sequence of unconstrained optimization problems with mechanisms to control the quality of the approximation for the Hessian matrix, which is expressed in terms of the constraint functions and their first derivatives. The unconstrained problems are solved using a modified Newton's algorithm. The method is particularly applicable to solution techniques where an approximate analysis step has to be used (e.g., constraint approximations), which often results in the violation of the constraints. The generalized penalty formulation contains two floating parameters, which are used to meet the penalty requirements and to control the errors in the approximation of the Hessian matrix. A third parameter is used to vary the class of standard barrier or quasibarrier functions, forming a branch of the variable penalty formulation. Several possibilities for choosing such floating parameters are discussed. The numerical effectiveness of this algorithm is demonstrated on a relatively large set of test examples.

19.
An algorithm was recently presented that minimizes a nonlinear function in several variables using a Newton-type curvilinear search path. In order to determine this curvilinear search path the eigenvalue problem of the Hessian matrix of the objective function has to be solved at each iteration of the algorithm. In this paper an iterative procedure requiring gradient information only is developed for the approximation of the eigensystem of the Hessian matrix. It is shown that for a quadratic function the approximated eigenvalues and eigenvectors tend rapidly to the actual eigenvalues and eigenvectors of its Hessian matrix. The numerical tests indicate that the resulting algorithm is very fast and stable. Moreover, the fact that some approximations to the eigenvectors of the Hessian matrix are available is used to get past saddle points and accelerate the rate of convergence on flat functions.
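The primitive behind "gradient information only" schemes of this kind is that a Hessian-vector product can be approximated by differencing gradients; the generic sketch below (names and the use of power iteration are ours; the paper's procedure differs in detail) recovers the dominant eigenpair this way:

    import numpy as np

    def hessian_vector(grad, x, v, h=1e-6):
        # H v is approximated as (g(x + h*v) - g(x)) / h.
        return (grad(x + h * v) - grad(x)) / h

    def dominant_eigenpair(grad, x, n, iters=200, seed=0):
        # Power iteration driven entirely by gradient evaluations.
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = hessian_vector(grad, x, v)
            v = w / np.linalg.norm(w)
        return v @ hessian_vector(grad, x, v), v  # Rayleigh quotient, vector

    # Quadratic test f(x) = 0.5 x^T A x, Hessian eigenvalues 4 and 1:
    A = np.array([[4.0, 0.0], [0.0, 1.0]])
    lam, v = dominant_eigenpair(lambda x: A @ x, np.zeros(2), 2)
    print(lam)  # ~4.0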

20.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and is therefore capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular direction of search, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.
