Similar Documents
 20 similar documents found.
1.
For solving unconstrained minimization problems, quasi-Newton methods are popular iterative methods. The secant condition, which employs only gradient information, is imposed on these methods. Several researchers have paid attention to other secant conditions to obtain a better approximation of the Hessian matrix of the objective function. Recently, Zhang et al. [New quasi-Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl. 102 (1999) 147–167] and Zhang and Xu [Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations, J. Comput. Appl. Math. 137 (2001) 269–278] proposed a modified secant condition which uses both gradient and function value information in order to achieve higher-order accuracy in approximating the curvature of the objective function. They showed the local and q-superlinear convergence of the BFGS-like and DFP-like updates based on their proposed secant condition. In this paper, we incorporate one parameter into this secant condition to switch smoothly between the standard secant condition and the secant condition of Zhang et al. We consider a modified Broyden family which includes the BFGS-like and DFP-like updates proposed by Zhang et al. We prove the local and q-superlinear convergence of our method.
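For reference, a hedged sketch of the construction the abstract describes, written in standard quasi-Newton notation; the symbols, the coefficients in theta_k, and the one-parameter blend are reconstructed from the usual form of the cited modified secant condition, not quoted from this paper:

```latex
% Standard secant condition (gradient information only)
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k.

% Modified secant condition of Zhang et al. (adds function-value information)
B_{k+1} s_k = y_k + \frac{\theta_k}{s_k^{\top} u_k}\, u_k, \qquad
\theta_k = 6\bigl(f_k - f_{k+1}\bigr) + 3\bigl(g_k + g_{k+1}\bigr)^{\top} s_k.

% One-parameter blend: \rho = 0 recovers the standard condition, \rho = 1 the modified one
B_{k+1} s_k = y_k + \rho\,\frac{\theta_k}{s_k^{\top} u_k}\, u_k, \qquad \rho \in [0, 1].
```

Here u_k is any vector with s_k^T u_k not equal to zero (for instance u_k = s_k).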

2.
A Generalized Quasi-Newton Equation and Computational Experience
The quasi-Newton equation has played a central role in quasi-Newton methods for solving systems of nonlinear equations and/or unconstrained optimization problems. Pan suggested a new equation instead, and showed that it is of the second order while the traditional one is of the first order, in a certain approximation sense [12]. In this paper, we make a generalization of the two equations that includes them as special cases. The generalized equation is analyzed, and new updates are derived from it. A DFP-like new update outperformed the traditional DFP update in computational experiments on a set of standard test problems.

3.
Optimization, 2012, 61(11): 2081–2089
We provide a maximum entropy derivation of a new family of BFGS-like methods. Similar results are then derived for block BFGS methods. This also yields an independent proof of a result of Fletcher (1991) and its generalization to the block case.

4.
We consider a class of weighted gradient methods for distributed resource allocation over a network. Each node of the network is associated with a local variable and a convex cost function; the sum of the variables (resources) across the network is fixed. Starting with a feasible allocation, each node updates its local variable in proportion to the differences between the marginal costs of itself and its neighbors. We focus on how to choose the proportional weights on the edges (scaling factors for the gradient method) to make this distributed algorithm converge, and on how to make the convergence as fast as possible. We give sufficient conditions on the edge weights for the algorithm to converge monotonically to the optimal solution; these conditions have the form of a linear matrix inequality. We give some simple, explicit methods to choose weights that satisfy these conditions. We derive a guaranteed convergence rate for the algorithm and find the weights that minimize this rate by solving a semidefinite program. Finally, we extend the main results to problems with general equality constraints and problems with a block-separable objective function. The authors are grateful to Professor Paul Tseng and the anonymous referee for their valuable comments that helped us to improve the presentation of this paper. Communicated by P. Tseng.
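A minimal Python sketch of the per-node update the abstract describes. The 4-node line network, the quadratic local costs, and the edge weights below are illustrative assumptions, and the weights are not chosen via the LMI/SDP conditions the paper studies:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5, 1.5])        # local costs f_i(x) = 0.5 * a_i * (x - b_i)**2
b = np.array([4.0, -1.0, 2.0, 0.0])
edges = [(0, 1), (1, 2), (2, 3)]          # undirected edges of a 4-node line network
weights = {e: 0.2 for e in edges}         # per-edge proportional weights (assumed)

def marginal_cost(x):
    return a * (x - b)                    # gradient of each node's local cost

x = np.ones(4)                            # feasible start; total resource = 4.0
for _ in range(500):
    g = marginal_cost(x)
    step = np.zeros_like(x)
    # Each edge shifts resource toward the endpoint with the lower marginal cost;
    # the two contributions cancel in the sum, so sum(x) stays fixed.
    for (i, j), w in weights.items():
        step[i] += w * (g[j] - g[i])
        step[j] += w * (g[i] - g[j])
    x += step

print("allocation:", x, "total:", x.sum())
```

At convergence the marginal costs equalize across nodes while the total resource is preserved, which is the defining property of this class of methods.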

5.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and therefore is capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular direction of search, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.

6.
There are potential advantages in formulating the routing problems in modern multiservice networks as multiple-objective problems. This paper presents a novel hierarchical bi-level multiobjective dynamic routing model for multiservice networks. It is based on a bi-objective shortest path algorithm, with dynamically adapted soft constraints, to compute alternative paths for each node pair, and on a heuristic to synchronously select alternative routing plans for the network in a dynamic alternative routing context. It is a routing method which periodically changes alternative paths as a function of periodic updates of certain QoS-related parameters obtained from real-time measurements. The performance of the proposed routing method is compared with two reference dynamic routing methods, namely RTNR and DAR, by means of a discrete-event simulator. A previous short version of this work was presented at INOC'03 (International Network Optimisation Conference). Work partially supported by programme POSI of the III EC programme cosponsored by FEDER and national funds.

7.
This paper is concerned with the solution of nonlinear least squares problems. A new secant method is suggested, which is based on an affine model of the objective function and updates the first-order approximation at each step as the iterations proceed. We present an algorithm which combines the new secant method with the Gauss-Newton method for general nonlinear least squares problems. Furthermore, we prove that this algorithm is Q-superlinearly convergent for large-residual problems under mild conditions.
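As background on why a secant-type correction matters for large-residual problems (this is the standard least-squares structure, not the paper's specific affine model): writing f(x) = (1/2)||r(x)||^2 with Jacobian J(x), the gradient and Hessian are

```latex
\nabla f(x) = J(x)^{\top} r(x), \qquad
\nabla^{2} f(x) = J(x)^{\top} J(x) + \sum_{i} r_i(x)\, \nabla^{2} r_i(x).
```

The Gauss-Newton method keeps only the J(x)^T J(x) term, which is a poor approximation of the Hessian when the residuals r_i remain large at the solution; the neglected second term is what a secant-type approximation is typically asked to supply.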

8.
This paper presents a mathematical model and a simulated annealing based solution approach for finding optimal location update and paging area configurations for mobile communication networks. We use a two-layered zone-based location registration and paging scheme in which the costs of location update and paging signaling traffic are reduced by introducing a two-step paging process. The location update and paging procedures in the two-layered scheme are first described, and an approximation of the measure required for calculating the paging-related signaling volume is provided based on assumptions about cell shapes and mobile stations' movement patterns. A simulated annealing (SA) based solution method is devised along with a greedy heuristic, and computational experiments are conducted to illustrate the superiority of the proposed SA-based method over other solution methods.
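A generic simulated-annealing skeleton in Python, shown only to make the solution approach concrete. The cost model, neighbourhood move, and cooling schedule are illustrative assumptions; the paper's two-layered location/paging model and its specific SA design are not reproduced here:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, alpha=0.95, iters=5000):
    """Minimize cost() starting from x0 using a basic geometric-cooling SA loop."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)                       # perturb the current configuration
        fy = cost(y)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t = max(t * alpha, 1e-8)               # geometric cooling, bounded away from zero
    return best, fbest
```

In the paper's setting, a configuration would encode the paging area assignment, the cost would be the combined location-update and paging signaling volume, and a greedy heuristic would supply the starting point.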

9.
A family of variable metric proximal methods
We consider conceptual optimization methods combining two ideas: the Moreau-Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. We outline several approaches based on this combination, and establish their global convergence. Then we study theoretically the local convergence properties of one of these approaches, which uses quasi-Newton updates of the objective function itself. Also, we obtain a globally and superlinearly convergent BFGS proximal method. At each step of our study, we single out the assumptions that are useful to derive the result concerned.
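As background (this is the standard definition, not anything specific to this paper): the Moreau-Yosida regularization of a convex function f with parameter lambda > 0, and the associated proximal point, are

```latex
F_{\lambda}(x) = \min_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\lVert y - x\rVert^{2} \Bigr\},
\qquad
p_{\lambda}(x) = \operatorname*{arg\,min}_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\lVert y - x\rVert^{2} \Bigr\}.
```

F_lambda is differentiable with gradient (x - p_lambda(x))/lambda and has the same minimizers as f, which is what makes quasi-Newton approximations of F_lambda (or of f itself, as in the abstract) a natural device.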

10.
A Class of Collinear Scaling Algorithms for Unconstrained Optimization. An appealing approach to the solution of nonlinear optimization problems based on conic models of the objective function was introduced by Davidon (1980). It leads to a broad class of algorithms which can be considered to generalize the existing quasi-Newton methods. One particular member of this class has been discussed in depth by Sorensen (1980), who proved some interesting theoretical properties. In this paper, we generalize Sorensen's technique to Spedicato's three-parameter family of variable-metric updates. Furthermore, we point out that the collinear scaling three-parameter family is essentially equivalent to the Spedicato three-parameter family. In addition, numerical experiments have been carried out to compare some collinear scaling algorithms with a straightforward implementation of the BFGS quasi-Newton method.

11.
The symmetric rank-one (SR1) formula is one of the competitive formulas among the quasi-Newton (QN) methods. In this paper, we propose some modified SR1 updates based on modified secant equations, which use both gradient and function information. Furthermore, to avoid the loss of positive definiteness and zero denominators in the new SR1 updates, we apply a restart procedure to the update. Three new algorithms are given to improve the Hessian approximation with modified secant equations for the SR1 method. Numerical results show that the proposed algorithms are very encouraging, and the advantage of the proposed algorithms over the standard SR1 and BFGS updates is clearly observed.
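A hedged Python sketch of a safeguarded SR1 update with the standard secant data y = g_new - g_old. The skip rule below is a common safeguard against near-zero denominators; the paper's modified secant equations and its restart procedure are not reproduced here:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Return the SR1 update of B, skipping it when the denominator is unsafe."""
    v = y - B @ s
    denom = float(v @ s)
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                  # near-zero denominator: keep the old approximation
    return B + np.outer(v, v) / denom
```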

12.
A new decomposition optimization algorithm, called path-following gradient-based decomposition, is proposed to solve separable convex optimization problems. Unlike the path-following Newton methods considered in the literature, this algorithm does not require any smoothness assumption on the objective function, which allows us to handle more general classes of problems arising in many real applications than path-following Newton methods can. The new algorithm is a combination of three techniques, namely smoothing, Lagrangian decomposition and the path-following gradient framework. The algorithm decomposes the original problem into smaller subproblems by using dual decomposition and smoothing via self-concordant barriers, updates the dual variables using a path-following gradient method, and allows one to solve the subproblems in parallel. Moreover, compared to augmented Lagrangian approaches, our algorithmic parameters are updated automatically without any tuning strategy. We prove the global convergence of the new algorithm and analyze its convergence rate. Then, we modify the proposed algorithm by applying Nesterov's accelerating scheme to get a new variant which has a better convergence rate than the first algorithm. Finally, we present preliminary numerical tests that confirm the theoretical development.

13.
14.
A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization
We present and analyze an interior-exterior augmented Lagrangian method for solving constrained optimization problems with both inequality and equality constraints. This method, the modified barrier-augmented Lagrangian (MBAL) method, is a combination of the modified barrier and the augmented Lagrangian methods. It is based on the MBAL function, which treats inequality constraints with a modified barrier term and equalities with an augmented Lagrangian term. The MBAL method alternately minimizes the MBAL function in the primal space and updates the Lagrange multipliers. For a large enough fixed barrier-penalty parameter, the MBAL method is shown to converge Q-linearly under the standard second-order optimality conditions. Q-superlinear convergence can be achieved by increasing the barrier-penalty parameter after each Lagrange multiplier update. We consider a dual problem that is based on the MBAL function. We prove a basic duality theorem for it and show that it has several important properties that fail to hold for the dual based on the classical Lagrangian.
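A hedged reconstruction of the kind of merit function involved, assembled from the standard modified-barrier (Polyak) and augmented-Lagrangian terms; the paper's exact MBAL function, sign conventions, and scalings are assumptions here. For a problem min f(x) subject to c_i(x) >= 0 and h_j(x) = 0, with multipliers lambda >= 0, nu and parameter mu > 0:

```latex
M_{\mu}(x,\lambda,\nu) = f(x)
- \mu \sum_{i} \lambda_i \ln\!\Bigl(1 + \frac{c_i(x)}{\mu}\Bigr)
- \sum_{j} \nu_j\, h_j(x)
+ \frac{1}{2\mu} \sum_{j} h_j(x)^{2},
```

with multiplier updates of the form lambda_i <- lambda_i / (1 + c_i(x)/mu) and nu_j <- nu_j - h_j(x)/mu after each (approximate) minimization in x.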

15.
We propose a method that combines a non-Euclidean gradient descent step with a generic matrix sketching procedure for solving unconstrained, nonconvex, matrix optimization problems in which the decision variable cannot be stored in memory due to its size and the objective function is the composition of a vector function with a linear operator. The method updates the sketch directly, without updating or storing the decision variable. Subsequence convergence, global convergence under the Kurdyka-Lojasiewicz property, and rate of convergence are established.

16.
Quasi-Newton equations play a central role in quasi-Newton methods for optimization, and various quasi-Newton equations are available. This paper gives a survey of these quasi-Newton equations and studies properties of quasi-Newton methods with updates satisfying different quasi-Newton equations. These include single-step quasi-Newton equations that use only gradient information or use both gradient and function value information in one step, and multi-step quasi-Newton equations that use the gradient information from the last m steps. The main properties of quasi-Newton methods with updates satisfying different quasi-Newton equations are studied, including the finite termination property, invariance, heredity of positive definite updates, consistency of search directions, global convergence, and local superlinear convergence.

17.
Local convergence analysis for partitioned quasi-Newton updates
This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain nonlinear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. Work supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).

18.
A new method for continuous global minimization problems, with the acronym SCM, is introduced. This method gives a simple transformation to convert the objective function into an auxiliary function with gradually fewer local minimizers. All local minimizers of the auxiliary function except a prefixed one lie in the region where the value of the objective function is lower than its current minimal value. Based on this method, an algorithm is designed which uses a local optimization method to minimize the auxiliary function and find a local minimizer at which the value of the objective function is lower than its current minimal value. The algorithm converges asymptotically, with probability one, to a global minimizer of the objective function. Numerical experiments on a set of standard test problems, with problem dimensions up to 50, show that the algorithm is very efficient compared with other global optimization methods.

19.
A fast descent algorithm for large-scale global optimization is proposed; it resorts to a "stretching" function technique and builds on a hybrid method (GRSA) that combines the simulated annealing (SA) algorithm with gradient-based methods. Unlike previously proposed methods, in which the original objective function remains unchanged during the whole course of optimization, the new method first constructs an auxiliary function at a local minimizer obtained by gradient-based methods; SA is then executed on this auxiliary function instead of on the original objective function, in order to improve the ability of the SA algorithm to jump from the currently discovered local minimum to a better one, from which the gradient-based methods restart a new local search. The above procedure is repeated until a global minimum is detected. In addition, a new next-trial-point generating scheme is designed to match the adopted "stretching" technique. It is verified by simulation, especially on large-scale problems, that the convergence speed is greatly accelerated; this is its main difference from many other reported methods, which mostly cope with functions of fewer than 50 variables and do not apply to large-scale optimization problems. Furthermore, the new algorithm functions as a global optimization procedure with a high success probability and high solution precision.

20.
We study the convergence properties of reduced Hessian successive quadratic programming for equality constrained optimization. The method uses a backtracking line search and updates an approximation to the reduced Hessian of the Lagrangian by means of the BFGS formula. Two merit functions are considered for the line search: the ℓ1 function and the Fletcher exact penalty function. We give conditions under which local and superlinear convergence is obtained, and also prove a global convergence result. The analysis allows the initial reduced Hessian approximation to be any positive definite matrix, and does not assume that the iterates converge or that the matrices are bounded. The effects of a second-order correction step, a watchdog procedure, and the choice of null space basis are considered. This work can be seen as an extension to reduced Hessian methods of the well-known results of Powell (1976) for unconstrained optimization. This author was supported, in part, by National Science Foundation grant CCR-8702403, Air Force Office of Scientific Research grant AFOSR-85-0251, and Army Research Office contract DAAL03-88-K-0086. This author was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contracts W-31-109-Eng-38 and DE FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
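For reference, the BFGS formula mentioned above is the standard update, written here for the reduced Hessian approximation B_k; the precise reduced-space definitions of s_k and y_k used in the paper are not reproduced, so treat them as placeholders:

```latex
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
+ \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
```

which preserves positive definiteness of B_k whenever the curvature condition y_k^T s_k > 0 holds.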
