Similar Literature
20 similar documents found (search time: 15 ms)
1.
We propose a new SQP algorithm for equality constrained optimization problems. By applying a quasi-Newton method to an augmented Lagrangian function, the algorithm derives an equality constrained quadratic programming subproblem whose solution yields a descent direction. The penalty parameter is adjusted automatically and is kept from tending to infinity. To overcome the Maratos effect, the augmented Lagrangian function is used as the merit function, combined with a second-order correction step. Under suitable conditions, the algorithm is proved to be globally convergent with a superlinear rate of convergence.
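For context, the equality-constrained augmented Lagrangian from which such SQP subproblems are derived is commonly written as (a standard textbook form, not necessarily the paper's exact notation):

$$ L_\rho(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} h(x) \;+\; \frac{\rho}{2}\,\|h(x)\|^2, $$

where $f$ is the objective, $h(x)=0$ collects the equality constraints, $\lambda$ is the multiplier estimate, and $\rho>0$ is the penalty parameter.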

2.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove dual properties of the augmented Lagrangian objective penalty function that are at least as good as those of the traditional Lagrangian function. Under some conditions, a saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming the existence of a saddle point follows when the Karush-Kuhn-Tucker condition holds. Second, we prove dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the modified augmented Lagrangian objective penalty function is exact, its saddle point exists, and a necessary and sufficient stability condition for this exactness at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is presented, with its convergence proved under some conditions.
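For reference, a standard augmented Lagrangian for the inequality-constrained problem $\min f(x)$ s.t. $g_i(x) \le 0$ is the classical Rockafellar form below; the objective penalty variants studied in the paper modify this construction:

$$ L_\rho(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2\rho}\sum_{i=1}^{m}\Big(\max\{0,\;\lambda_i+\rho\,g_i(x)\}^2-\lambda_i^2\Big). $$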

3.
Among the penalty based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively, so that the corresponding penalized function dynamically moves its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side, however, is that the overall algorithm requires a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and computationally expensive. In this paper, we devise a genetic algorithm based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters adaptively based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, thereby endowing the resulting hybrid procedure with a theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL is found, in most problems, to be smaller than that needed by a number of existing evolutionary based constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method is also able to find the optimal Lagrange multiplier associated with each constraint as an added benefit, a matter that is important for a sensitivity analysis of the obtained optimized solution but one that has not received adequate attention in past evolutionary constrained optimization studies.
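As a minimal illustration of the outer loop that GAAL hybridizes (this is the generic augmented Lagrangian scheme with the classical first-order multiplier update, not the paper's GA-based strategy; the toy problem, solver choice, tolerances, and penalty update rule are assumptions for the example):

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, iters=20, tol=1e-8):
    """Minimize f(x) subject to h(x) = 0 via a basic augmented Lagrangian loop.

    Uses the classical first-order multiplier update lam <- lam + rho * h(x).
    A generic sketch, not the GAAL method from the paper.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(h(x))
    for _ in range(iters):
        # Inner unconstrained minimization of the augmented Lagrangian.
        L = lambda z: f(z) + lam @ h(z) + 0.5 * rho * np.sum(h(z) ** 2)
        x = minimize(L, x, method="BFGS").x
        v = h(x)
        if np.linalg.norm(v) < tol:
            break
        lam = lam + rho * v          # first-order multiplier update
        rho *= 2.0                   # simple penalty increase (one common rule)
    return x, lam

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])
x_opt, lam_opt = augmented_lagrangian(f, h, x0=[0.0, 0.0])
print(x_opt, lam_opt)   # expect approximately [0.5, 0.5] and lam near -1
```

Each pass through this loop is one of the "serial unconstrained optimization tasks" mentioned above; GAAL replaces the classical inner solver with a genetic algorithm and updates the critical parameters from population statistics.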

4.
In this two-part study, we develop a unified approach to the analysis of the global exactness of various penalty and augmented Lagrangian functions for constrained optimization problems in finite-dimensional spaces. This approach allows one to verify in a simple and straightforward manner whether a given penalty/augmented Lagrangian function is exact, i.e., whether the problem of unconstrained minimization of this function is equivalent (in some sense) to the original constrained problem, provided the penalty parameter is sufficiently large. Our approach is based on the so-called localization principle, which reduces the study of global exactness to a local analysis of a chosen merit function near globally optimal solutions. In turn, such local analysis can be performed with the use of optimality conditions and constraint qualifications. In the first paper, we introduce the concept of global parametric exactness and derive the localization principle in the parametric form. With the use of this version of the localization principle, we recover existing simple necessary and sufficient conditions for the global exactness of linear penalty functions and for the existence of augmented Lagrange multipliers of the Rockafellar–Wets augmented Lagrangian. We also present completely new necessary and sufficient conditions for the global exactness of general nonlinear penalty functions and for the global exactness of a continuously differentiable penalty function for nonlinear second-order cone programming problems. We briefly discuss how one can construct a continuously differentiable exact penalty function for nonlinear semidefinite programming problems as well.
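In this terminology, a penalty function is globally exact when, for every sufficiently large penalty parameter, its unconstrained minimizers coincide with the solutions of the constrained problem; for the linear penalty case this reads schematically (illustrative notation):

$$ F_c(x) \;=\; f(x) + c\,\varphi(x), \qquad \operatorname*{arg\,min}_{x}\,F_c(x) \;=\; \operatorname*{arg\,min}_{x\in\Omega}\,f(x) \quad\text{for all } c \ge c^{*}, $$

where $\varphi \ge 0$ vanishes exactly on the feasible set $\Omega$.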

5.
In the second part of our study, we introduce the concept of global extended exactness of penalty and augmented Lagrangian functions and derive the localization principle in the extended form. The main idea behind extended exactness is to extend the original constrained optimization problem by adding extra variables, and then to construct a penalty/augmented Lagrangian function for the extended problem. This approach allows one to design extended penalty/augmented Lagrangian functions having useful properties (such as smoothness) that their counterparts for the original problem might not possess. In turn, the global exactness of such extended merit functions can be easily proved with the use of the localization principle presented in this paper, which reduces the study of global exactness to a local analysis of a merit function based on sufficient optimality conditions and constraint qualifications. We utilize the localization principle to obtain simple necessary and sufficient conditions for the global exactness of the extended penalty function introduced by Huyer and Neumaier, and to construct a globally exact continuously differentiable augmented Lagrangian function for nonlinear semidefinite programming problems.

6.
In this paper, we design a numerical algorithm for solving a simple bilevel program in which the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem where the first-order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
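Schematically, the combined reformulation referred to above has the following shape (illustrative notation: $V$ is the lower-level value function and $N_Y$ the normal cone to the convex set $Y$; the paper's precise formulation may differ in detail):

$$ \min_{x,\;y\in Y}\; F(x,y) \quad\text{s.t.}\quad f(x,y)-V(x)\le 0, \qquad 0 \in \nabla_y f(x,y) + N_Y(y), $$

where $V(x)=\min_{y\in Y} f(x,y)$ is in general nonsmooth, which is what motivates the smoothing augmented Lagrangian treatment.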

7.
Optimality (or KKT) systems arise as primal-dual stationarity conditions for constrained optimization problems. Under suitable constraint qualifications, local minimizers satisfy the KKT equations but, unfortunately, many other stationary points (including, perhaps, maximizers) may solve these nonlinear systems too. For this reason, nonlinear-programming solvers make strong use of the minimization structure, and the naive use of nonlinear-system solvers in optimization may lead to spurious solutions. Nevertheless, in the basin of attraction of a minimizer, nonlinear-system solvers may be quite efficient. In this paper, quasi-Newton methods for solving nonlinear systems are used as accelerators of nonlinear-programming (augmented Lagrangian) algorithms with equality constraints. A periodically restarted memoryless symmetric rank-one (SR1) correction method is introduced for that purpose. Convergence results are given, and numerical experiments confirming that the acceleration is effective are presented. This work was supported by FAPESP, CNPq, PRONEX-Optimization (CNPq / FAPERJ), FAEPEX, UNICAMP.
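For reference, the SR1 secant correction used in such accelerators has the standard form

$$ B_{k+1} \;=\; B_k \;+\; \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k}, $$

with $s_k = x_{k+1}-x_k$ and $y_k$ the corresponding difference of nonlinear-system residuals; "memoryless" means the correction is applied to a fixed matrix (e.g., the identity) rather than to the previous approximation, and the method restarts periodically.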

8.
Based on the classic augmented Lagrangian multiplier method, we propose, analyze, and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each iteration. We establish convergence for this algorithm and apply it to problems in image reconstruction with total variation regularization. We present numerical results showing that the resulting solver, called TVAL3, is competitive with, and often outperforms, other state-of-the-art solvers in the field.
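The underlying image-reconstruction model is of the familiar total variation form (stated generically here; the paper's formulation may add variants such as nonnegativity constraints or a penalized data term):

$$ \min_{u}\;\sum_{i}\|D_i u\|_2 \quad\text{s.t.}\quad Au=b, $$

where $D_i u$ is the discrete gradient of the image $u$ at pixel $i$ and $A$ is the measurement operator.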

9.
We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems. Using the primal function of a constrained optimization problem, we apply our results both in the analysis of duality schemes constructed using augmented Lagrangian functions, and in establishing necessary and sufficient conditions for the convergence of penalty methods.

10.
We consider the task of design optimization where the constraint is a state equation that can only be solved by a typically rather slowly converging fixed point solver. This process can be augmented by a corresponding adjoint solver and, based on the resulting approximate reduced derivatives, by an optimization iteration that actually changes the design. To coordinate the three iterative processes, we use an exact penalty function of doubly augmented Lagrangian type. The main issue is how to derive a design space preconditioner for the approximate reduced gradient which ensures a consistent reduction of the employed penalty function as well as significant design corrections. Numerical experiments on a variant of the Bratu problem are presented for an alternating approach in which any combination and sequencing of steps is used to improve feasibility and optimality.

11.
Analysis on a Superlinearly Convergent Augmented Lagrangian Method
Ya Xiang YUAN
Abstract The augmented Lagrangian method is a classical method for solving constrained optimization. Recently, the augmented Lagrangian method has attracted much attention due to its applications to sparse optimization in compressive sensing and low-rank matrix optimization problems. However, most Lagrangian methods use first-order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second-order information and prove that superlinear convergence can be obtained.

12.
Numerical methods for solving constrained optimization problems need to incorporate the constraints in a manner that balances essentially competing interests: the incorporation needs to be simple enough that the solution method is tractable, yet complex enough to ensure the validity of the ultimate solution. We introduce a framework for constraint incorporation that identifies a minimal acceptable level of complexity and defines two basic types of constraint incorporation which (with combinations) cover nearly all popular numerical methods for constrained optimization, including trust region methods, penalty methods, barrier methods, penalty-multiplier methods, and sequential quadratic programming methods. The broad application of our framework relies on addition and chain rules for constraint incorporation, which we develop here.

13.
Log-Sigmoid Multipliers Method in Constrained Optimization
In this paper, we introduce and analyze the Log-Sigmoid (LS) multipliers method for constrained optimization. The LS method is to the recently developed smoothing technique what the augmented Lagrangian is to the penalty method, or the modified barrier to classical barrier methods. At the same time, the LS method has some specific properties that make it substantially different from other nonquadratic augmented Lagrangian techniques. We establish convergence of the LS-type penalty method under very mild assumptions on the input data, and estimate the rate of convergence of the LS multipliers method under the standard second-order optimality condition, for both exact and inexact minimization. Some important properties of the dual function and the dual problem, which are based on the LS Lagrangian, are established, and the primal-dual LS method is introduced.
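Methods of this family replace the quadratic penalty term by a smooth transform of the constraints; a generic nonquadratic augmented Lagrangian of this type (with the specific log-sigmoid transform $\psi$ defined in the paper) reads:

$$ \mathcal{L}_k(x,\lambda) \;=\; f(x) \;-\; \frac{1}{k}\sum_{i=1}^{m}\lambda_i\,\psi\big(k\,c_i(x)\big), $$

for constraints $c_i(x)\ge 0$, multipliers $\lambda_i\ge 0$, and a scaling parameter $k>0$.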

14.
In 1988, Tapia (Ref. 1) developed and analyzed SQP secant methods in equality constrained optimization that explicitly take the additive structure of the problem setting into account. In this paper, we extend Tapia's augmented scale Lagrangian secant method to the case where additional structure coming from the objective function is available. Using the example of nonlinear least squares with equality constraints, we demonstrate these ideas and develop a convergence theory proving local and q-superlinear convergence for this kind of structured SQP algorithm. This research was supported by the Studienstiftung des Deutschen Volkes.

15.
Based on an augmented Lagrangian line search function, a sequential quadratically constrained quadratic programming (SQCQP) method is proposed for solving nonlinearly constrained optimization problems. In contrast to the quadratic programs solved in traditional SQP methods, a convex quadratically constrained quadratic program is solved here to obtain a search direction, and the Maratos effect does not occur without any further correction. The "active set" strategy used in this subproblem avoids recalculating unnecessary gradients and (approximate) Hessian matrices of the constraints. Under certain assumptions, the proposed method is proved to be globally, superlinearly, and quadratically convergent. As an extension, general problems with inequality and equality constraints, as well as a nonmonotone line search, are also considered.
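The subproblem in such SQCQP methods replaces the linearized constraints of classical SQP with quadratic approximations; a generic form (notation assumed for illustration) is

$$ \min_{d}\;\nabla f(x_k)^{\top}d + \tfrac{1}{2}\,d^{\top}B_k d \quad\text{s.t.}\quad g_i(x_k) + \nabla g_i(x_k)^{\top}d + \tfrac{1}{2}\,d^{\top}C_{i,k}\,d \le 0, \quad i=1,\dots,m, $$

where $B_k$ and $C_{i,k}$ are (approximate) Hessians of the objective and of the constraints.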

16.
In this paper, the augmented Lagrangian SQP method is considered for the numerical solution of optimization problems with equality constraints. The problem is formulated in a Hilbert space setting. Since the augmented Lagrangian SQP method is a type of Newton method for the nonlinear system of necessary optimality conditions, it is conceivable that q-quadratic convergence can be shown to hold locally in the pair (x, λ). Our interest lies in the convergence of the variable x alone. We improve convergence estimates for the Newton multiplier update, which does not satisfy the same convergence properties in x as, for example, the least-squares multiplier update. We discuss these updates in the context of parameter identification problems. Furthermore, we extend the convergence results to inexact augmented Lagrangian methods. Numerical results for a control problem are also presented.

17.
The augmented Lagrangian method is a classical method for solving constrained optimization. Recently, the augmented Lagrangian method has attracted much attention due to its applications to sparse optimization in compressive sensing and low-rank matrix optimization problems. However, most Lagrangian methods use first-order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second-order information and prove that superlinear convergence can be obtained. Theoretical properties of the update formula are given, and some implementation issues regarding the new update are also discussed.
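The contrast is with the classical first-order multiplier update for equality constraints $h(x)=0$,

$$ \lambda_{k+1} \;=\; \lambda_k + \rho\,h(x_k), $$

which yields only linear convergence; the update studied in the paper instead uses second-order (curvature) information, in the spirit of a Newton-type step on the dual, to achieve superlinear convergence (the formula above is the standard baseline, not the paper's new update).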

18.
The spectral projected gradient (SPG) method is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez, and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai-Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the subproblems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and compared with the performance of LANCELOT. Second, a family of location problems in the minimax formulation is solved against the package FFSQP.
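A bare-bones sketch of the SPG iteration for box constraints (spectral Barzilai-Borwein step plus projection; the step-length safeguards and the nonmonotone line search of the full algorithm are omitted, and the toy problem is an assumption for the example):

```python
import numpy as np

def spg_box(grad, x, lo, hi, iters=100, tol=1e-8):
    """Bare-bones spectral projected gradient for min f(x) s.t. lo <= x <= hi.

    Sketch only: the real SPG adds a nonmonotone line search and
    safeguards on the spectral step length.
    """
    proj = lambda z: np.clip(z, lo, hi)
    alpha = 1.0
    g = grad(x)
    for _ in range(iters):
        x_new = proj(x - alpha * g)        # projected gradient step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        # Spectral (Barzilai-Borwein) step length for the next iteration.
        alpha = (s @ s) / sy if sy > 0 else 1.0
        x, g = x_new, g_new
    return x

# Toy example: minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box.
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
print(spg_box(grad, x=np.array([0.5, 0.5]), lo=0.0, hi=1.0))  # expect [1.0, 0.0]
```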

19.
A trajectory-based method for solving constrained nonlinear optimization problems is proposed. The method is an extension of a trajectory-based method for unconstrained optimization. The optimization problem is transformed into a system of second-order differential equations with the aid of the augmented Lagrangian. Several novel contributions are made, including a new penalty parameter updating strategy, an adaptive step size routine for numerical integration, and a scaling mechanism. A new criterion is suggested for the adjustment of the penalty parameter. Global convergence properties of the method are established.
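The transformation referred to produces damped second-order dynamics driven by the augmented Lagrangian; a generic system of this kind (illustrative only, the paper's exact equations and parameter updates differ) is

$$ \ddot{x}(t) + \gamma\,\dot{x}(t) \;=\; -\nabla_x L_\rho\big(x(t),\lambda\big), $$

whose trajectories are followed numerically with the adaptive step size routine mentioned above.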

20.
In this paper, in order to obtain existence results for solutions of the augmented Lagrangian problem for a constrained problem in which the objective function and constraint functions are noncoercive, we construct a new augmented Lagrangian function by using an auxiliary function. We establish a zero duality gap result and a sufficient condition for an exact penalization representation of the constrained problem, without coercivity or level-boundedness assumptions on the objective function and constraint functions. By assuming that the sequence of multipliers is bounded, we obtain the existence of a global minimum and an asymptotically minimizing sequence for the constrained optimization problem.
