Similar Literature
20 similar records found.
1.
We present a class of new augmented Lagrangian functions with the essential property that each member is concave quadratic when viewed as a function of the multiplier. This leads to an improved duality theory and to a related class of exact penalty functions. In addition, a relationship between Newton steps for the classical Lagrangian and the new Lagrangians is established. This work was supported in part by ARO Grant No. DAAG29-77-G-0125.
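As a rough illustration of the structural idea (our own toy construction, not the paper's exact class): the classical augmented Lagrangian is linear in the multiplier, and subtracting a quadratic proximal term in the multiplier yields a function that is concave quadratic in it. The names `f`, `h`, `lam_bar`, and `c` below are illustrative.

```python
def classical_aug_lagrangian(f, h, x, lam, c):
    # Classical augmented Lagrangian for min f(x) s.t. h(x) = 0:
    # linear (hence only trivially concave) in the multiplier lam.
    return f(x) + lam * h(x) + 0.5 * c * h(x) ** 2

def prox_aug_lagrangian(f, h, x, lam, lam_bar, c):
    # Subtracting a quadratic proximal term in lam gives curvature -1/c
    # in lam, i.e. a function that is concave quadratic in the multiplier.
    return (classical_aug_lagrangian(f, h, x, lam, c)
            - (lam - lam_bar) ** 2 / (2.0 * c))
```

For fixed x the second difference in lam is the constant 2 · (-1/(2c)) · Δ², confirming strict concavity in the multiplier.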

2.
This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of rigid constraints that must be satisfied during the iterations and techniques for balancing the error associated with constraint violation with the error associated with optimality. A preconditioner is constructed with the property that the rigid constraints are satisfied while ill-conditioning due to penalty terms is alleviated. Various numerical linear algebra techniques required for the efficient implementation of the algorithm are presented, and convergence behavior is illustrated in a series of numerical experiments. This research was supported by the National Science Foundation Grant DMS-89-03226 and by the U.S. Army Research Office Contract DAA03-89-M-0314. We thank the referees for their many perceptive comments which led to substantial improvements in the presentation of this paper.
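The overall scheme this abstract describes can be sketched in a few lines: an inner unconstrained minimization of the augmented Lagrangian followed by the first-order multiplier update. In this hedged sketch, plain gradient descent stands in for the conjugate gradient method, and the toy problem and all constants are our own illustration, not from the paper.

```python
def aug_lagrangian_solve(f_grad, h, h_grad, x, lam=0.0, c=10.0,
                         outer=20, inner=500, lr=0.01):
    # Outer loop: multiplier update lam <- lam + c * h(x).
    # Inner loop: gradient descent on the augmented Lagrangian
    # f(x) + lam * h(x) + (c/2) * h(x)^2 for the current lam.
    for _ in range(outer):
        for _ in range(inner):
            hx = h(x)
            g = [gf + (lam + c * hx) * gh
                 for gf, gh in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += c * h(x)
    return x, lam

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1
# (solution x* = (0.5, 0.5), multiplier lam* = -1).
x, lam = aug_lagrangian_solve(
    f_grad=lambda x: [2 * x[0], 2 * x[1]],
    h=lambda x: x[0] + x[1] - 1.0,
    h_grad=lambda x: [1.0, 1.0],
    x=[0.0, 0.0])
```

Note how the penalty weight c both conditions the inner problem and sets the linear rate of the outer multiplier iteration, which is the balancing act the abstract refers to.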

3.
We study the properties of the augmented Lagrangian function for nonlinear semidefinite programming. It is shown that, under a set of sufficient conditions, the augmented Lagrangian algorithm is locally convergent when the penalty parameter is larger than a certain threshold. An error estimate of the solution, depending on the penalty parameter, is also established. The first author was partially supported by Singapore-MIT Alliance and by the National University of Singapore under Grants RP314000-028/042/057-112. The second author was partially supported by the Funds of the Ministry of Education of China for PhD Units under Grant 20020141013 and the National Natural Science Foundation of China under Grant 10471015.

4.
Two dual methods for solving constrained optimization problems are presented: the Dual Active Set algorithm and an algorithm combining an unconstrained minimization scheme, an augmented Lagrangian and multiplier updates. A new preconditioner is introduced that has a significant impact on the speed of convergence. This research was supported by US Army Research Office Contract DAAL03-89-G-0082, and by National Science Foundation Grant DMS-9022899.

5.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality-constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming a saddle point exists whenever the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the modified augmented Lagrangian objective penalty function is exact, its saddle point exists. A necessary and sufficient stability condition for the modified augmented Lagrangian objective penalty function to be exact at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality-constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is presented, with its convergence proved under some conditions.

6.
Nonlinear rescaling and proximal-like methods in convex optimization
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their sets of optimal solutions coincide. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular, a new class of exponential penalty-modified barrier function methods is introduced. Partially supported by the National Science Foundation under Grants DMS-9201297 and DMS-9401871. Partially supported by NASA Grant NAG3-1397 and NSF Grant DMS-9403218.
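A well-known special case of this family is the exponential multiplier method: each constraint g(x) <= 0 contributes the term (lam/c) * exp(c * g(x)) to the unconstrained subproblem, and the multiplier is updated multiplicatively, which keeps it positive. The sketch below is our own toy instance (problem, constants, and the bisection inner solver are illustrative, not from the paper).

```python
import math

def bisect_root(F, lo, hi, iters=100):
    # Simple bisection for a monotone increasing F with F(lo) < 0 < F(hi).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exponential_multiplier_method(c=2.0, lam=1.0, outer=30):
    # Toy problem: min x^2  s.t.  g(x) = 1 - x <= 0
    # (solution x* = 1, multiplier lam* = 2).
    x = 1.0
    for _ in range(outer):
        # Stationarity of x^2 + (lam/c) * exp(c * (1 - x)) in x:
        F = lambda t, lam=lam: 2.0 * t - lam * math.exp(c * (1.0 - t))
        x = bisect_root(F, 0.0, 10.0)
        lam *= math.exp(c * (1.0 - x))   # multiplicative update keeps lam > 0
    return x, lam
```

The multiplicative update is exactly the "explicit formula updating the Lagrange multipliers" of the abstract, specialized to the exponential scaling function.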

7.
A proximal-based decomposition method for convex minimization problems
This paper presents a decomposition method for solving convex minimization problems. At each iteration, the algorithm computes two proximal steps in the dual variables and one proximal step in the primal variables. We derive this algorithm from Rockafellar's proximal method of multipliers, which involves an augmented Lagrangian with an additional quadratic proximal term. The algorithm preserves the good features of the proximal method of multipliers, with the additional advantage that it leads to a decoupling of the constraints, and is thus suitable for parallel implementation. The proximal minimization steps may be computed approximately, and we prove that under mild assumptions on the problem's data, the method is globally convergent and at a linear rate. The method is compared with alternating direction type methods and applied to the particular case of minimizing a convex function over a finite intersection of closed convex sets. Partially supported by Air Force Office of Scientific Research Grant 91-0008 and National Science Foundation Grant DMS-9201297.
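The proximal step argmin_x { f(x) + (1/(2t)) * (x - v)^2 } is the basic building block of such methods. As a minimal standalone illustration (our own, unrelated to the paper's specific algorithm), the proximal step of the absolute value has the familiar soft-thresholding closed form:

```python
def prox_abs(v, t):
    # prox_{t|.|}(v) = argmin_x { |x| + (1/(2t)) * (x - v)^2 }
    # Closed form: shrink v toward zero by t (soft thresholding).
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0
```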

8.
Rockafellar's quadratic augmented Lagrangian for inequality constrained minimization is not twice differentiable. To eliminate this drawback, several quite complicated Lagrangians have been proposed. We exhibit a simple cubic Lagrangian that is twice differentiable. It stems from the recent work of Eckstein and Teboulle on Bregman-related Lagrangians. This research was supported by the State Committee for Scientific Research under Grant 8S50502206.
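For intuition (a generic sketch, not the paper's exact Lagrangian): the quadratic penalty term max(0, t)^2 has a second derivative that jumps from 0 to 2 at t = 0, while the cubic term max(0, t)^3 is twice continuously differentiable, its second derivative 6 * max(0, t) vanishing from both sides at 0.

```python
def quadratic_term(t):
    return max(0.0, t) ** 2   # C^1 only: second derivative jumps 0 -> 2 at t = 0

def cubic_term(t):
    return max(0.0, t) ** 3   # C^2: second derivative 6 * max(0, t) is continuous

def second_derivative(f, t, h=1e-4):
    # Central finite-difference estimate of f''(t).
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)
```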

9.
This paper introduces an algorithm for convex minimization which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm. The method uses the Hessian of a certain outer function which depends on the Jacobian of a proximal point mapping which, in turn, depends on the preconditioner matrix and on a Lagrangian Hessian relative to a certain tangent space. Convergence is proved under boundedness assumptions on the preconditioner sequence. Research supported by NSF Grant No. DMS-9402018 and by Institut National de Recherche en Informatique et en Automatique, France.

10.
A new algorithm for solving nonconvex, equality-constrained optimization problems with separable structures is proposed in the present paper. A new augmented Lagrangian function is derived, and an iterative method is presented. The new proposed Lagrangian function preserves separability when the original problem is separable, and the property of linear convergence of the new algorithm is also presented. Unlike earlier algorithms for nonconvex decomposition, the convergence ratio for this method can be made arbitrarily small. Furthermore, it is feasible to extend this method to algorithms suited for inequality-constrained optimization problems. An example is included to illustrate the method. This research was supported in part by the National Science Foundation under NSF Grant No. ECS-85-06249.

11.
Mathematical Programming - In this article, we present new general results on existence of augmented Lagrange multipliers. We define a penalty function associated with an augmented Lagrangian, and...

12.
In this paper, the necessary optimality conditions for an unconstrained optimal control problem are used to derive a quasi-Newton method where the update involves only second-order derivative terms. A pointwise update which was presented in a previous paper by the authors is changed to allow for more general second-order sufficiency conditions in the control problem. In particular, pointwise versions of the Broyden, PSB, and SR1 update are considered. A convergence rate theorem is given for the Broyden and PSB versions. This research was supported by NSF Grant No. DMS-89-00410, by NSF Grant No. INT-88-00560, by AFOSR Grant No. AFOSR-89-0044, and by the Deutsche Forschungsgemeinschaft.
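For reference, the finite-dimensional SR1 update mentioned here has a compact closed form (this is the standard matrix-case formula written out by us, not the paper's pointwise control-problem version): B <- B + r r^T / (r^T s) with r = y - B s, which enforces the secant condition B_{new} s = y.

```python
def sr1_update(B, s, y, tol=1e-8):
    # Symmetric rank-one update: B <- B + (r r^T) / (r^T s), r = y - B s.
    # Skipped when the denominator is too small, as is customary.
    n = len(B)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    if abs(denom) < tol:
        return B
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)]
            for i in range(n)]

# On a quadratic with Hessian H, feeding exact pairs (s, y = H s) along
# linearly independent directions recovers H in n updates.
H = [[2.0, 0.0], [0.0, 4.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
B = sr1_update(B, [1.0, 0.0], [2.0, 0.0])
B = sr1_update(B, [0.0, 1.0], [0.0, 4.0])
```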

13.
In the second part of our study, we introduce the concept of global extended exactness of penalty and augmented Lagrangian functions, and derive the localization principle in the extended form. The main idea behind the extended exactness consists in an extension of the original constrained optimization problem by adding some extra variables, and then construction of a penalty/augmented Lagrangian function for the extended problem. This approach allows one to design extended penalty/augmented Lagrangian functions having some useful properties (such as smoothness), which their counterparts for the original problem might not possess. In turn, the global exactness of such extended merit functions can be easily proved with the use of the localization principle presented in this paper, which reduces the study of global exactness to a local analysis of a merit function based on sufficient optimality conditions and constraint qualifications. We utilize the localization principle in order to obtain simple necessary and sufficient conditions for the global exactness of the extended penalty function introduced by Huyer and Neumaier, and in order to construct a globally exact continuously differentiable augmented Lagrangian function for nonlinear semidefinite programming problems.

14.
In this paper, an approximate augmented Lagrangian function for nonlinear semidefinite programs is introduced. Some basic properties of the approximate augmented Lagrangian function such as monotonicity and convexity are discussed. Necessary and sufficient conditions for approximate strong duality results are derived. Conditions for an approximate exact penalty representation in the framework of augmented Lagrangian are given. Under certain conditions, it is shown that any limit point of a sequence of stationary points of approximate augmented Lagrangian problems is a KKT point of the original semidefinite program and that a sequence of optimal solutions to augmented Lagrangian problems converges to a solution of the original semidefinite program.

15.
Given an augmented Lagrangian scheme for a general optimization problem, we use an epsilon-subgradient step for improving the dual function. This can be seen as an update for an augmented penalty method, which is more stable because it does not force the penalty parameter to tend to infinity. We establish primal-dual convergence for this augmented penalty method with the new update. As an illustration, we apply our method to the kissing number problem as a test bed.
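The plain (non-augmented, exact-subgradient) version of such a dual step is easy to sketch. In this toy instance of our own choosing, a subgradient of the dual function at lam is simply the constraint residual evaluated at the Lagrangian minimizer:

```python
def dual_subgradient_ascent(lam=0.0, step=1.0, iters=60):
    # Toy problem: min x^2  s.t.  x = 1.
    # Dual function q(lam) = min_x { x^2 + lam * (1 - x) } = lam - lam^2 / 4,
    # attained at x(lam) = lam / 2; a subgradient of q at lam is 1 - lam / 2.
    x = 0.0
    for _ in range(iters):
        x = lam / 2.0                # minimizer of the Lagrangian
        lam += step * (1.0 - x)      # subgradient step on the dual
    return x, lam
```

An epsilon-subgradient variant would allow the inner minimization to be inexact, which is the point of the abstract's update.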

16.
An augmented Lagrangian nonlinear programming algorithm has been developed. Its goals are to achieve robust global convergence and fast local convergence. Several unique strategies help the algorithm achieve these dual goals. The algorithm consists of three nested loops. The outer loop estimates the Kuhn-Tucker multipliers at a rapid linear rate of convergence. The middle loop minimizes the augmented Lagrangian function for fixed multipliers. This loop uses the sequential quadratic programming technique with a box trust-region stepsize restriction. The inner loop solves a single quadratic program. Slack variables and a constrained form of the fixed-multiplier middle-loop problem work together with curved line searches in the inner-loop problem to allow large penalty weights for rapid outer-loop convergence. The inner-loop quadratic programs include quadratic constraint terms, which complicate the inner loop but speed the middle-loop progress when the constraint curvature is large. The new algorithm compares favorably with a commercial sequential quadratic programming algorithm on five low-order test problems. Its convergence is more robust, and its speed is not much slower. This research was supported in part by the National Aeronautics and Space Administration under Grant No. NAG-1-1009.

17.
An algorithm using column generation and penalty function techniques is presented. A linear program with a uniformly bounded number of columns, similar to the restricted master in generalized programming, is used to reduce the number of constraints included in forming a penalty function. The penalty function is used as a Lagrangian in an unconstrained subproblem. This work was supported in part by National Science Foundation Grant GS-3032.

18.
This paper contributes to the development of the field of augmented Lagrangian multiplier methods for general nonlinear programming by introducing a new update for the multipliers corresponding to inequality constraints. The update maintains naturally the nonnegativity of the multipliers without the need for a positive-orthant projection, as a result of the verification of the first-order necessary conditions for the minimization of a modified augmented Lagrangian penalty function. In the new multiplier method, the roles of the multipliers are interchanged: the multipliers corresponding to the inequality constraints are updated explicitly, whereas the multipliers corresponding to the equality constraints are approximated implicitly. It is shown that the basic properties of local convergence of the traditional multiplier method are valid also for the proposed method.
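For contrast, the traditional first-order update that the paper's method avoids projects onto the nonnegative orthant: lam <- max(0, lam + c * g(x)). A self-contained toy run of that classical update (problem and constants are our own illustration, not the paper's method):

```python
def projected_multiplier_method(c=4.0, lam=0.0, outer=40):
    # Toy problem: min x^2  s.t.  g(x) = 1 - x <= 0
    # (solution x* = 1, multiplier lam* = 2).
    x = 0.0
    for _ in range(outer):
        # Exact minimizer of the quadratic augmented Lagrangian in x:
        # when the constraint term is active, solve 2x = lam + c * (1 - x).
        x = (lam + c) / (2.0 + c)
        if lam + c * (1.0 - x) < 0.0:    # inactive: plain minimizer of x^2
            x = 0.0
        lam = max(0.0, lam + c * (1.0 - x))   # positive-orthant projection
    return x, lam
```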

19.
A variable-penalty alternating directions method for convex optimization
We study a generalized version of the method of alternating directions as applied to the minimization of the sum of two convex functions subject to linear constraints. The method consists of solving consecutively in each iteration two optimization problems which contain in the objective function both Lagrangian and proximal terms. The minimizers determine the new proximal terms and a simple update of the Lagrangian terms follows. We prove a convergence theorem which extends existing results by relaxing the assumption of uniqueness of minimizers. Another novelty is that we allow penalty matrices, and these may vary per iteration. This can be beneficial in applications, since it allows additional tuning of the method to the problem and can lead to faster convergence relative to fixed penalties. As an application, we derive a decomposition scheme for block angular optimization and present computational results on a class of dual block angular problems. This material is based on research supported by the Air Force Office of Scientific Research Grant AFOSR-89-0410 and by NSF Grants CCR-8907671, CDA-9024618 and CCR-9306807.
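A scalar sketch of the alternating-directions idea with a per-iteration penalty (entirely our own toy instance: f(x) = (x - 3)^2, g(z) = |z|, constraint x = z; when the scalar penalty rho changes, the scaled dual variable is rescaled to stay consistent):

```python
def soft_threshold(v, t):
    return max(0.0, abs(v) - t) * (1.0 if v >= 0 else -1.0)

def variable_penalty_admm(rho=1.0, iters=200):
    # min (x - 3)^2 + |z|  s.t.  x = z   (solution x* = z* = 2.5).
    x = z = u = 0.0                               # u is the scaled dual variable
    for _ in range(iters):
        x = (6.0 + rho * (z - u)) / (2.0 + rho)   # prox step on (x - 3)^2
        z = soft_threshold(x + u, 1.0 / rho)      # prox step on |z|
        u += x - z                                # multiplier (dual) update
        new_rho = min(10.0, 1.05 * rho)           # vary the penalty per iteration
        u *= rho / new_rho                        # keep u = y / rho consistent
        rho = new_rho
    return x, z
```

Letting rho grow (here capped at 10) mimics the tuning freedom the abstract attributes to varying penalty matrices.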

20.
We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combined in a Lagrangian barrier function. A sequence of such functions is approximately minimized within the domain defined by the simple bounds. Global convergence of the sequence of generated iterates to a first-order stationary point for the original problem is established. Furthermore, possible numerical difficulties associated with barrier function methods are avoided as it is shown that a potentially troublesome penalty parameter is bounded away from zero. This paper is a companion to previous work of ours on augmented Lagrangian methods.

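One classical instance of a Lagrangian barrier term (a hedged sketch in the spirit of modified-barrier methods; the toy problem and constants are ours, not from the paper) attaches f(x) - (1/c) * sum_i lam_i * log(1 + c * g_i(x)) to constraints g_i(x) >= 0, with the multiplier update lam_i <- lam_i / (1 + c * g_i(x)):

```python
import math

def bisect_root(F, lo, hi, iters=200):
    # Simple bisection for a monotone increasing F with F(lo) < 0 < F(hi).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lagrangian_barrier_method(c=1.0, lam=1.0, outer=40):
    # Toy problem: min x^2  s.t.  g(x) = x - 1 >= 0
    # (solution x* = 1, multiplier lam* = 2).
    x = 1.0
    for _ in range(outer):
        # Stationarity of x^2 - (lam/c) * log(1 + c * (x - 1)) in x,
        # valid on the barrier domain 1 + c * (x - 1) > 0:
        F = lambda t, lam=lam: 2.0 * t - lam / (1.0 + c * (t - 1.0))
        x = bisect_root(F, 1.0 - 1.0 / c + 1e-9, 10.0)
        lam /= 1.0 + c * (x - 1.0)
    return x, lam
```

Unlike a pure barrier method, the barrier parameter c can stay fixed here, echoing the abstract's point that the troublesome parameter need not be driven to its limit.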

