Similar Articles
20 similar articles found.
1.
Stabilized SQP revisited
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed in order to achieve superlinear convergence in situations when the Lagrange multipliers associated with a solution are not unique. Within the framework of Fischer (Math Program 94:91–124, 2002), the key to local superlinear convergence of sSQP lies in the following two properties: upper Lipschitzian behavior of solutions of the Karush-Kuhn-Tucker (KKT) system under canonical perturbations, and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47–73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover, for the problem with equality constraints only, we prove superlinear convergence of sSQP under the assumption that the dual starting point is close to a noncritical multiplier. Since noncritical multipliers include all those satisfying SOSC but are not limited to them, we believe this gives the first superlinear convergence result for any Newtonian method for constrained optimization under assumptions that do not include any constraint qualifications and are weaker than SOSC. In the general case when inequality constraints are present, we show that such a relaxation of assumptions is not possible. We also consider applying sSQP to the problem where inequality constraints are reformulated into equalities using slack variables, and discuss the assumptions needed for convergence in this approach. We conclude with consequences for local regularization methods proposed in Izmailov and Solodov (SIAM J Optim 16:210–228, 2004) and Wright (SIAM J Optim 15:673–676, 2005). In particular, we show that these methods are still locally superlinearly convergent under the noncritical multiplier assumption, which is weaker than the SOSC employed originally.
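For orientation, one common way to write the sSQP subproblem in the equality-constrained case $\min f(x)$ s.t. $h(x)=0$ is the following min-max problem (a sketch of the standard construction, not a quotation from the paper; $\sigma_k>0$ is a stabilization parameter, typically tied to the KKT residual at the current primal-dual iterate $(x_k,\lambda_k)$):

$$ \min_{d}\ \max_{\lambda}\ \nabla f(x_k)^{\top}d + \tfrac12\, d^{\top}\nabla^2_{xx}L(x_k,\lambda_k)\,d + \lambda^{\top}\bigl(h(x_k)+h'(x_k)d\bigr) - \tfrac{\sigma_k}{2}\,\|\lambda-\lambda_k\|^2, \qquad L(x,\lambda)=f(x)+\lambda^{\top}h(x). $$

For $\sigma_k=0$ this reduces to the usual SQP (Newton-Lagrange) subproblem; the extra quadratic term in $\lambda$ is what keeps the subproblem solvable and the dual step bounded when the multiplier set is not a singleton.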

2.
We discuss possible scenarios of behaviour of the dual part of sequences generated by primal-dual Newton-type methods when applied to optimization problems with nonunique multipliers associated with a solution. Those scenarios are: (a) failure of convergence of the dual sequence; (b) convergence to a so-called critical multiplier (which, in particular, violates some second-order sufficient conditions for optimality), the latter appearing to be a typical scenario when critical multipliers exist; (c) convergence to a noncritical multiplier. The case of mathematical programs with complementarity constraints is also discussed. We illustrate those scenarios with examples and discuss consequences for the speed of convergence. We also put together a collection of examples of optimization problems with constraints violating some standard constraint qualifications, intended for preliminary testing of existing algorithms on degenerate problems, or for developing special new algorithms designed to deal with constraint degeneracy. Research of the first author is supported by the Russian Foundation for Basic Research Grants 07-01-00270, 07-01-00416 and 07-01-90102-Mong, and by RF President’s Grant NS-9344.2006.1 for the support of leading scientific schools. The second author is supported in part by CNPq Grants 301508/2005-4, 490200/2005-2 and 550317/2005-8, by PRONEX-Optimization, and by FAPERJ Grant E-26/151.942/2004.

3.
Nonlinear rescaling vs. smoothing technique in convex optimization
We introduce an alternative to the smoothing technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use this modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraint transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problem relates to the corresponding smoothing penalty function as the augmented Lagrangian relates to the classical penalty function, or as modified barrier functions (MBFs) relate to classical barrier functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of quadratic and nonquadratic augmented Lagrangians and at the same time are free from their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in primal space, followed by an update of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multipliers methods, which are equivalent to interior quadratic prox methods for the dual problem. We prove convergence and estimate the rate of convergence of the NR multipliers method under very mild assumptions on the input data, and we also estimate the rate of convergence under various assumptions on the input data. In particular, under the standard second-order optimality conditions the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters that correspond to the active constraints. We also establish global quadratic convergence of the NR methods for linear programming with a unique dual solution. We provide numerical results, which strongly support the theory. Received: September 2000 / Accepted: October 2001 / Published online April 12, 2002
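As a rough sketch of the nonlinear rescaling idea itself (the generic framework, not the particular modification constructed in the paper), for $\min f(x)$ s.t. $c_i(x)\ge 0$ one picks a concave increasing scaling function $\psi$ with $\psi(0)=0$ and $\psi'(0)=1$ and a parameter $k>0$, so that

$$ c_i(x)\ge 0 \iff k^{-1}\psi\bigl(k\,c_i(x)\bigr)\ge 0, \qquad \mathcal{L}_k(x,\lambda)=f(x)-k^{-1}\sum_i \lambda_i\,\psi\bigl(k\,c_i(x)\bigr), $$

and the multipliers method alternates $\hat x\in\arg\min_x \mathcal{L}_k(\cdot,\lambda)$ with the update $\lambda_i^{+}=\lambda_i\,\psi'\bigl(k\,c_i(\hat x)\bigr)$.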

4.
It has been previously demonstrated that in the case when a Lagrange multiplier associated with a given solution is not unique, Newton iterations [e.g., those of sequential quadratic programming (SQP)] have a tendency to converge to special multipliers, called critical multipliers (when such critical multipliers exist). This fact is of importance because critical multipliers violate the second-order sufficient optimality conditions, and this was shown to be the reason for the slow convergence typically observed for problems with degenerate constraints (convergence to noncritical multipliers results in a superlinear rate despite degeneracy). Some theoretical and numerical validation of this phenomenon can be found in Izmailov and Solodov (Comput Optim Appl 42:231–264, 2009; Math Program 117:271–304, 2009). However, previous studies concerned the basic forms of Newton iterations. The question remained whether the attraction phenomenon still persists for relevant modifications, as well as in professional implementations. In this paper, we answer this question in the affirmative by presenting numerical results for the well-known MINOS and SNOPT software packages applied to a collection of degenerate problems. We also extend previous theoretical considerations to the linearly constrained Lagrangian methods and to the quasi-Newton SQP, on which MINOS and SNOPT are based. Experiments also show that in the stabilized version of SQP the attraction phenomenon still exists but appears less persistent.

5.
We present alternative methods for verifying the quality of a proposed solution to a two-stage stochastic program with recourse. Our methods revolve around implications of a dual problem in which dual multipliers on the nonanticipativity constraints play a critical role. Using randomly sampled observations of the stochastic elements, we introduce notions of statistical dual feasibility and sampled error bounds. Additionally, we use the nonanticipativity multipliers to develop connections to reduced gradient methods. Finally, we propose a statistical test based on directional derivatives. We illustrate the applicability of these tests via some examples. This work was supported in part by Grant No. NSF-DMI-9414680 from the National Science Foundation.

6.
We present a primal-dual row-action method for the minimization of a convex function subject to general convex constraints. Constraints are used one at a time, and no changes are made in the constraint functions or their Jacobian matrix (hence the row-action nature of the algorithm); at each iteration a subproblem is solved consisting of minimization of the objective function subject to one or two linear equations. The algorithm generates two sequences: one of them, called primal, converges to the solution of the problem; the other one, called dual, approximates a vector of optimal KKT multipliers for the problem. We prove convergence of the primal sequence for general convex constraints. In the case of linear constraints, we prove that the primal sequence converges at least linearly and obtain as a consequence the convergence of the dual sequence. The research of the first author was partially supported by CNPq Grant No. 301280/86.

7.
The equilibrium strategy for $N$-person differential games can be obtained from a min-max problem subject to differential constraints. The differential constraints are treated here by the duality and penalty methods. We first formulate the duality theory. This involves the introduction of $N+1$ Lagrange multipliers: one for each player and one commonly shared by all players. The primal min-max problem thus results in a dual problem, which is a max-min problem with no differential constraints. We develop the penalty theory by penalizing $N+1$ differential constraints. We give a convergence proof which generalizes a theorem due to B.T. Polyak.

8.
Lagrangian relaxation is a popular technique for solving difficult optimization problems. However, the applicability of this technique depends on having a relatively low number of hard constraints to dualize. When there are many hard constraints, it may be preferable to relax them dynamically, according to some rule depending on which multipliers are active. From the dual point of view, this approach yields multipliers with varying dimensions and a dual objective function that changes along the iterations. We discuss how to apply a bundle methodology to solve this kind of dual problem. Our framework covers many separation procedures to generate inequalities that can be found in the literature, including (but not limited to) the most violated inequality. We analyze the resulting dynamic bundle method, giving a positive answer regarding its primal-dual convergence properties and, under suitable conditions, showing finite termination for polyhedral problems. Claudia Sagastizábal is on leave from INRIA Rocquencourt, France. Research supported by CNPq Grant No. 303540-03/6.
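To fix ideas, here is a small self-contained sketch of dynamic relaxation for a toy binary problem min c@x s.t. A@x <= b, x in {0,1}^n, where hard rows are dualized only once a separation step finds them violated. A plain projected-subgradient update stands in for the bundle step of the paper, and all data and names (A, b, c, step sizes) are purely illustrative.

```python
import numpy as np

def dynamic_relaxation(c, A, b, iters=200, step0=1.0):
    """Sketch of dynamic Lagrangian relaxation for min c@x s.t. A@x <= b, x in {0,1}^n.
    Rows are dualized lazily: an inequality enters the dual vector only once a
    separation step finds it violated.  A plain projected-subgradient update
    stands in for the bundle step of the dynamic bundle method."""
    m, _ = A.shape
    active = []                      # indices of currently dualized rows
    lam = np.zeros(0)                # multipliers for the active rows
    best_bound = -np.inf             # best lower bound found so far
    for k in range(iters):
        # Lagrangian subproblem: min_x (c + A_J^T lam) @ x over {0,1}^n
        red_cost = c + (A[active].T @ lam if active else 0.0)
        x = (red_cost < 0).astype(float)
        bound = red_cost @ x - (lam @ b[active] if active else 0.0)
        best_bound = max(best_bound, bound)
        # separation: dualize the most violated inequality not yet active
        viol = A @ x - b
        candidates = [i for i in range(m) if i not in active and viol[i] > 1e-9]
        if candidates:
            j = max(candidates, key=lambda i: viol[i])
            active.append(j)
            lam = np.append(lam, 0.0)
        # projected subgradient ascent step on the active multipliers
        if active:
            g = A[active] @ x - b[active]
            lam = np.maximum(0.0, lam + (step0 / (k + 1)) * g)
    return best_bound

# tiny illustrative instance (all data hypothetical)
rng = np.random.default_rng(0)
A = rng.integers(0, 4, size=(6, 8)).astype(float)
b = A.sum(axis=1) / 3.0
c = -rng.random(8)
print("Lagrangian lower bound:", dynamic_relaxation(c, A, b))
```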

9.
The paper presents a new approach to solving nonlinear programming (NLP) problems for which the strict complementarity condition (SCC), a constraint qualification (CQ), and a second-order sufficient condition (SOSC) for optimality are not necessarily satisfied at a solution. Our approach is based on the construction of p-regularity and on reformulating the inequality constraints as equalities. Namely, by introducing slack variables, we obtain an equality-constrained problem whose Lagrange optimality system is singular at the solution of the NLP problem when the CQ, SCC, and/or SOSC are violated. To overcome the difficulty of singularity, we propose the p-factor method for solving the Lagrange system. The method has a superlinear rate of convergence under a mild assumption. We show that our assumption is always satisfied under the standard SOSC. At the same time, we give examples of problems where the SOSC does not hold but our assumption is satisfied. Moreover, no estimation of the set of active constraints is required. The proposed approach can be applied to a variety of problems.

10.
The attraction of dual trajectories of Newton’s method for the Lagrange system to critical Lagrange multipliers is analyzed. This stable effect, which has been confirmed by numerical practice, causes the Newton-Lagrange method to lose its superlinear convergence when applied to problems with irregular constraints. At the same time, available theoretical results are of a “negative” character; i.e., they show that convergence to a noncritical multiplier is not possible or unlikely. In the case of a purely quadratic problem with a single constraint, a “positive” result is proved for the first time, demonstrating that the critical multipliers are attractors for the dual trajectories. Additionally, the influence exerted by the attraction to critical multipliers on the convergence rate of the primal and dual trajectories is characterized.
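A toy instance of the setting described above (purely quadratic objective, one quadratic constraint, non-unique multipliers) can be worked out by hand: for min x^2 subject to x^2 = 0, every lambda is a Lagrange multiplier at x* = 0, the critical multiplier is lambda = -1, and Newton's method applied to the Lagrange system drives the dual iterates straight to it while the primal rate degrades to linear. The short script below is purely illustrative and not taken from the paper.

```python
import numpy as np

# Toy degenerate problem: minimize x^2 subject to x^2 = 0.
# At the solution x* = 0 every lambda is a Lagrange multiplier;
# lambda = -1 is the critical one, since d^2/dx^2 L(x, lambda) = 2(1 + lambda)
# vanishes there.
def newton_lagrange(x, lam, iters=12):
    for k in range(iters):
        # Lagrange system: grad_x L = 2x(1 + lam) = 0,  h(x) = x^2 = 0
        F = np.array([2 * x * (1 + lam), x ** 2])
        J = np.array([[2 * (1 + lam), 2 * x],
                      [2 * x,         0.0 ]])
        dx, dlam = np.linalg.solve(J, -F)
        x, lam = x + dx, lam + dlam
        print(f"iter {k:2d}:  x = {x: .3e}   lambda = {lam: .6f}")
    return x, lam

newton_lagrange(x=1.0, lam=0.5)
# The dual iterates converge to the critical multiplier -1, while the primal
# iterates are only halved each step (linear, not superlinear, convergence).
```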

11.
Log-Sigmoid Multipliers Method in Constrained Optimization
In this paper we introduce and analyze the Log-Sigmoid (LS) multipliers method for constrained optimization. The LS method relates to the recently developed smoothing technique as the augmented Lagrangian relates to the penalty method, or the modified barrier to classical barrier methods. At the same time, the LS method has some specific properties that make it substantially different from other nonquadratic augmented Lagrangian techniques. We establish convergence of the LS-type penalty method under very mild assumptions on the input data and estimate the rate of convergence of the LS multipliers method under the standard second-order optimality condition, for both exact and nonexact minimization. Some important properties of the dual function and the dual problem, which are based on the LS Lagrangian, are established, and the primal–dual LS method is introduced.
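For orientation only, one log-sigmoid-type scaling function with the normalization used by nonlinear rescaling methods (the paper's exact definition may differ) is

$$ \psi(t) = 2\ln\frac{2}{1+e^{-t}} = 2\bigl(\ln 2+\ln\sigma(t)\bigr), \qquad \sigma(t)=\frac{1}{1+e^{-t}}, $$

which satisfies $\psi(0)=0$, $\psi'(t)=\dfrac{2e^{-t}}{1+e^{-t}}$, hence $\psi'(0)=1$, and $\psi''(t)<0$, so $\psi$ is strictly concave.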

12.
In the present paper, we propose a novel convergence analysis of the alternating direction method of multipliers, based on its equivalence with the overrelaxed primal–dual hybrid gradient algorithm. We consider the smooth case, where the objective function can be decomposed into one part that is differentiable with Lipschitz continuous gradient and one strongly convex part. Under these hypotheses, a convergence proof with an optimal parameter choice is given for the primal–dual method, which leads to convergence results for the alternating direction method of multipliers. An accelerated variant of the latter, based on a parameter relaxation, is also proposed and is shown to converge linearly with the same asymptotic rate as the primal–dual algorithm.
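For context, a standard (unaccelerated, scaled-form) ADMM iteration for the lasso problem min 0.5*||Ax-b||^2 + gamma*||z||_1 s.t. x = z is shown below; the paper's analysis concerns this kind of splitting when one part is smooth and the other strongly convex, but the code is just the generic textbook scheme with illustrative data.

```python
import numpy as np

def admm_lasso(A, b, gamma, rho=1.0, iters=300):
    """Generic scaled-form ADMM for  min 0.5*||A x - b||^2 + gamma*||z||_1  s.t. x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)       # u: scaled dual variable
    M = A.T @ A + rho * np.eye(n)                           # matrix reused by every x-update
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * (z - u))     # x-minimization step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - gamma / rho, 0.0)  # soft threshold
        u = u + x - z                                       # dual (multiplier) update
    return z

# illustrative data
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15)); b = rng.standard_normal(40)
print(admm_lasso(A, b, gamma=0.5))
```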

13.
The multiplier method of Hestenes and Powell applied to convex programming
For nonlinear programming problems with equality constraints, Hestenes and Powell have independently proposed a dual method of solution in which squares of the constraint functions are added as penalties to the Lagrangian, and a certain simple rule is used for updating the Lagrange multipliers after each cycle. Powell has essentially shown that the rate of convergence is linear if one starts with a sufficiently high penalty factor and sufficiently near to a local solution satisfying the usual second-order sufficient conditions for optimality. This paper furnishes the corresponding method for inequality-constrained problems. Global convergence to an optimal solution is established in the convex case for an arbitrary penalty factor and without the requirement that an exact minimum be calculated at each cycle. Furthermore, the Lagrange multipliers are shown to converge, even though the optimal multipliers may not be unique.This work was supported in part by the Air Force Office of Scientific Research under Grant No. AF-AFOSR-72-2269.
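As a concrete illustration of a standard inequality version of this scheme (a sketch under common conventions, not code from the paper), the augmented Lagrangian for min f(x) s.t. g(x) <= 0 uses the term (1/2c)*sum(max(0, lam_i + c*g_i(x))^2 - lam_i^2) and the update lam_i <- max(0, lam_i + c*g_i(x)); the toy convex problem, penalty value, and use of scipy as the inner solver are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex program:  min (x1 - 2)^2 + (x2 - 1)^2   s.t.  x1 + x2 <= 2,  -x1 <= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0, -x[0]])

def aug_lagrangian(x, lam, c):
    # augmented Lagrangian with the classical inequality penalty term
    return f(x) + (1.0 / (2.0 * c)) * np.sum(np.maximum(0.0, lam + c * g(x)) ** 2 - lam ** 2)

def method_of_multipliers(c=10.0, outer_iters=20):
    x, lam = np.zeros(2), np.zeros(2)
    for _ in range(outer_iters):
        # approximate unconstrained minimization in x (exact minimization is not required)
        x = minimize(aug_lagrangian, x, args=(lam, c), method="BFGS").x
        # multiplier update
        lam = np.maximum(0.0, lam + c * g(x))
    return x, lam

x_opt, lam_opt = method_of_multipliers()
print("x* ~", x_opt, "  multipliers ~", lam_opt)   # expected roughly (1.5, 0.5) and (1, 0)
```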

14.
In this paper, under the existence of a certificate of nonnegativity of the objective function over the given constraint set, we present saddle-point global optimality conditions and a generalized Lagrangian duality theorem for (not necessarily convex) polynomial optimization problems, where the Lagrange multipliers are polynomials. We show that the nonnegativity certificate together with the archimedean condition guarantees that the values of the Lasserre hierarchy of semidefinite programming (SDP) relaxations of the primal polynomial problem converge asymptotically to the common primal–dual value. We then show that the known regularity conditions that guarantee finite convergence of the Lasserre hierarchy also ensure that the nonnegativity certificate holds and the values of the SDP relaxations converge finitely to the common primal–dual value. Finally, we provide classes of nonconvex polynomial optimization problems for which the Slater condition guarantees the required nonnegativity certificate and the common primal–dual value with constant multipliers and the dual problems can be reformulated as semidefinite programs. These classes include some separable polynomial programs and quadratic optimization problems with quadratic constraints that admit certain hidden convexity. We also give several numerical examples that illustrate our results.

15.
A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization
We present and analyze an interior-exterior augmented Lagrangian method for solving constrained optimization problems with both inequality and equality constraints. This method, the modified barrier-augmented Lagrangian (MBAL) method, is a combination of the modified barrier and the augmented Lagrangian methods. It is based on the MBAL function, which treats inequality constraints with a modified barrier term and equalities with an augmented Lagrangian term. The MBAL method alternately minimizes the MBAL function in the primal space and updates the Lagrange multipliers. For a large enough fixed barrier-penalty parameter the MBAL method is shown to converge Q-linearly under the standard second-order optimality conditions. Q-superlinear convergence can be achieved by increasing the barrier-penalty parameter after each Lagrange multiplier update. We consider a dual problem that is based on the MBAL function. We prove a basic duality theorem for it and show that it has several important properties that fail to hold for the dual based on the classical Lagrangian.
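Schematically, and only as an assumption about the general shape of such a function rather than a quotation of the paper's definition, an MBAL-type function for $\min f(x)$ s.t. $c_i(x)\ge 0$, $h(x)=0$ combines a modified-barrier term for the inequalities with an augmented-Lagrangian term for the equalities,

$$ F_k(x,\lambda,\mu)=f(x)-\frac1k\sum_i\lambda_i\ln\bigl(1+k\,c_i(x)\bigr)+\mu^{\top}h(x)+\frac k2\,\|h(x)\|^2, $$

with the customary multiplier updates $\lambda_i^{+}=\lambda_i/\bigl(1+k\,c_i(\hat x)\bigr)$ and $\mu^{+}=\mu+k\,h(\hat x)$ after each (approximate) minimization in $x$ at $\hat x$.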

16.
We consider a dual method for solving non-strictly convex programs possessing a certain separable structure. This method may be viewed as a dual version of a block coordinate ascent method studied by Auslender [1, Section 6]. We show that the decomposition methods of Han [6, 7] and the method of multipliers may be viewed as special cases of this method. We also prove a convergence result for this method which can be applied to sharpen the available convergence results for Han's methods.The main part of this research was conducted while the author was with the Laboratory for Information and Decision Systems, M.I.T., Cambridge, with support by the U.S. Army Research Office, Contract No. DAAL03-86-K-0171 (Center for Intelligent Control Systems) and by the National Science Foundation under Grant ECS-8519058.

17.
We consider a class of optimization problems with switch-off/switch-on constraints, which is a relatively new problem model. The specificity of this model is that it contains constraints that are imposed (switched on) at some points of the feasible region and disregarded (switched off) at other points. This seems to be a potentially useful modeling paradigm that has been shown to be helpful, for example, in optimal topology design. The fact that some constraints “vanish” from the problem at certain points gave rise to the name of mathematical programs with vanishing constraints (MPVC). It turns out that such problems are usually degenerate at a solution, but they are structurally different from the related class of mathematical programs with complementarity constraints (MPCC). In this paper, we first discuss some known first- and second-order necessary optimality conditions for MPVC, giving new, very short and direct justifications. We then derive some new special second-order sufficient optimality conditions for these problems and show that, quite remarkably, these conditions are actually equivalent to the classical/standard second-order sufficient conditions in optimization. We also provide a sensitivity analysis for MPVC. Finally, a relaxation method is proposed. For this method, we analyze constraint regularity and boundedness of the Lagrange multipliers in the relaxed subproblems, derive a sufficient condition for local uniqueness of solutions of the subproblems, and give convergence estimates. Research of the first author was supported by the Russian Foundation for Basic Research Grants 07-01-00270, 07-01-00416 and 07-01-90102-Mong, and by RF President’s Grant NS-9344.2006.1 for the support of leading scientific schools. The second author was supported in part by CNPq Grants 301508/2005-4, 490200/2005-2 and 550317/2005-8, by PRONEX-Optimization, and by FAPERJ.
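In the usual notation of this literature, the problem class has the form (a sketch of the standard MPVC format, not a restatement of this paper's conditions)

$$ \min_x\ f(x)\quad\text{s.t.}\quad H_i(x)\ge 0,\qquad G_i(x)\,H_i(x)\le 0,\qquad i=1,\dots,m, $$

so the constraint $G_i(x)\le 0$ is switched on wherever $H_i(x)>0$ and vanishes wherever $H_i(x)=0$; a typical relaxation replaces the product constraint by $G_i(x)\,H_i(x)\le t_k$ with $t_k\downarrow 0$.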

18.
Algorithms for convex programming, based on penalty methods, can be designed to have good primal convergence properties even without uniqueness of optimal solutions. Taking primal convergence for granted, in this paper we investigate the asymptotic behavior of an appropriate dual sequence obtained directly from primal iterates. First, under mild hypotheses, which include the standard Slater condition but neither strict complementarity nor second-order conditions, we show that this dual sequence is bounded and also, each cluster point belongs to the set of Karush–Kuhn–Tucker multipliers. Then we identify a general condition on the behavior of the generated primal objective values that ensures the full convergence of the dual sequence to a specific multiplier. This dual limit depends only on the particular penalty scheme used by the algorithm. Finally, we apply this approach to prove the first general dual convergence result of this kind for penalty-proximal algorithms in a nonlinear setting.
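To make the construction of a dual sequence from primal iterates concrete, for a generic smooth penalty scheme applied to $\min f(x)$ s.t. $g_i(x)\le 0$ (the standard construction; the paper's notation may differ),

$$ x^k\in\arg\min_x\ f(x)+r_k\sum_i P\bigl(g_i(x)\bigr) \quad\Longrightarrow\quad \lambda_i^k := r_k\,P'\bigl(g_i(x^k)\bigr), $$

so that $\nabla f(x^k)+\sum_i\lambda_i^k\nabla g_i(x^k)=0$; for the quadratic penalty $P(t)=\tfrac12\max\{0,t\}^2$ this gives $\lambda_i^k=r_k\max\{0,g_i(x^k)\}$.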

19.
Optimization, 2012, 61(5–6):495–516
For optimization problems that are structured both with respect to the constraints and with respect to the variables, it is possible to use primal–dual solution approaches based on decomposition principles. One can construct a primal subproblem, by fixing some variables, and a dual subproblem, by relaxing some constraints and fixing their Lagrange multipliers, so that both of these problems are much easier to solve than the original problem. We study methods based on these subproblems that do not include the difficult Benders or Dantzig-Wolfe master problems, namely primal–dual subgradient optimization methods, mean value cross decomposition, and several combinations of the different techniques. In this paper, these solution approaches are applied to the well-known uncapacitated facility location problem. Computational tests show that some combination methods yield near-optimal solutions more quickly than the classical dual ascent method of Erlenkotter.

20.
This paper deals with a new algorithm for a 0-1 bidimensional knapsack Lagrangean dual which relaxes one of the two constraints. Classical iterative algorithms generate a sequence of multipliers which converges to an optimal one. In this way, these methods generate a sequence of 0-1 one-dimensional knapsack instances. Generally, the procedure for solving each instance is considered as a black box. We propose to design a new iterative scheme in which the computation of the step size takes into account the algorithmic efficiency of each instance. Our adapted step size iterative algorithm compares favorably with several other algorithms for the 0-1 biknapsack Lagrangean dual over difficult instances for CPLEX 7.0.
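The underlying relaxation is easy to state: dualize one of the two knapsack constraints with a multiplier u >= 0 and, for each u, solve an ordinary 0-1 knapsack. The sketch below uses a plain DP knapsack solver and a textbook subgradient step with a simple diminishing step size (an illustrative rule, not the adapted step-size rule proposed in the paper); all data are hypothetical.

```python
import numpy as np

def knapsack_01(values, weights, capacity):
    """DP for max values@x s.t. weights@x <= capacity, x in {0,1}^n (integer weights).
    Returns (optimal value, 0/1 solution vector)."""
    n, cap = len(values), int(capacity)
    dp = np.zeros((n + 1, cap + 1))
    for i in range(1, n + 1):
        w, v = int(weights[i - 1]), values[i - 1]
        for c in range(cap + 1):
            dp[i, c] = dp[i - 1, c]
            if w <= c and v > 0 and dp[i - 1, c - w] + v > dp[i, c]:
                dp[i, c] = dp[i - 1, c - w] + v
    x, c = np.zeros(n), cap          # traceback to recover the chosen items
    for i in range(n, 0, -1):
        if dp[i, c] != dp[i - 1, c]:
            x[i - 1] = 1.0
            c -= int(weights[i - 1])
    return dp[n, cap], x

def biknapsack_dual(p, a, b, cap_a, cap_b, iters=80, step0=2.0):
    """Lagrangean dual bound for max p@x s.t. a@x <= cap_a, b@x <= cap_b, x in {0,1}^n,
    obtained by dualizing the second constraint with a multiplier u >= 0."""
    u, best = 0.0, np.inf
    for k in range(iters):
        val, x = knapsack_01(p - u * b, a, cap_a)   # one-dimensional knapsack subproblem
        best = min(best, val + u * cap_b)           # dual value: an upper bound
        g = cap_b - b @ x                           # subgradient of the dual at u
        u = max(0.0, u - (step0 / (k + 1)) * g)     # minimize the dual over u >= 0
    return best

p = np.array([10., 7., 6., 4., 9.]); a = np.array([4., 3., 2., 1., 5.])
b = np.array([3., 5., 4., 2., 2.])
print("dual upper bound:", biknapsack_dual(p, a, b, cap_a=8, cap_b=8))
```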

