Similar Documents
20 similar documents retrieved, search time 31 ms.
1.
Usual global convergence results for sequential quadratic programming (SQP) algorithms with linesearch rely on some a priori assumptions about the generated sequences, such as boundedness of the primal sequence and/or of the dual sequence and/or of the sequence of values of a penalty function used in the linesearch procedure. Different convergence statements use different combinations of assumptions, but they all assume boundedness of at least one of the sequences mentioned above. In the given context boundedness assumptions are particularly undesirable, because even for non-pathological and well-behaved problems the associated penalty functions (whose descent is used to produce primal iterates) may not be bounded below for any value of the penalty parameter. Consequently, boundedness assumptions on the iterates are not easily justifiable. By introducing a very simple and computationally cheap safeguard in the linesearch procedure, we prove boundedness of the primal sequence in the case when the feasible set is nonempty, convex, and bounded. If, in addition, the Slater condition holds, we obtain a complete global convergence result without any a priori assumptions on the iterative sequences. The safeguard consists of not accepting a further increase of constraint violation at iterates which are infeasible beyond a chosen threshold, which can always be ensured by the proposed modified SQP linesearch criterion. The author is supported in part by CNPq Grants 301508/2005-4, 490200/2005-2, 550317/2005-8, by PRONEX–Optimization, and by FAPERJ Grant E-26/151.942/2004.
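As a rough illustration of the safeguarded linesearch described above, the following Python sketch combines a standard Armijo backtracking test on a penalty function with the extra acceptance rule; the names `phi`, `viol`, and the threshold `delta` are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def safeguarded_linesearch(x, d, phi, viol, grad_phi_dot_d, delta=1e-2,
                           sigma=1e-4, beta=0.5, max_backtracks=30):
    """Armijo backtracking on a penalty function phi with a safeguard:
    if the current point is infeasible beyond the threshold delta,
    reject any trial step that increases the constraint violation."""
    phi_x, viol_x = phi(x), viol(x)
    t = 1.0
    for _ in range(max_backtracks):
        x_trial = x + t * d
        armijo_ok = phi(x_trial) <= phi_x + sigma * t * grad_phi_dot_d
        # Safeguard: beyond the infeasibility threshold, do not accept
        # a further increase of the constraint violation.
        safeguard_ok = (viol_x <= delta) or (viol(x_trial) <= viol_x)
        if armijo_ok and safeguard_ok:
            return x_trial, t
        t *= beta
    return x, 0.0  # linesearch failed; caller may adjust the penalty parameter
```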

2.
We consider the class of quadratically-constrained quadratic-programming methods in the framework extended from optimization to more general variational problems. Previously, in the optimization case, Anitescu (SIAM J. Optim. 12, 949–978, 2002) showed superlinear convergence of the primal sequence under the Mangasarian-Fromovitz constraint qualification and the quadratic growth condition. Quadratic convergence of the primal-dual sequence was established by Fukushima, Luo and Tseng (SIAM J. Optim. 13, 1098–1119, 2003) under the assumption of convexity, the Slater constraint qualification, and a strong second-order sufficient condition. We obtain a new local convergence result, which complements the above (it is neither stronger nor weaker): we prove primal-dual quadratic convergence under the linear independence constraint qualification, strict complementarity, and a second-order sufficiency condition. Additionally, our results apply to variational problems beyond the optimization case. Finally, we provide a necessary and sufficient condition for superlinear convergence of the primal sequence under a Dennis-Moré type condition. Research of the second author is partially supported by CNPq Grants 300734/95-6 and 471780/2003-0, by PRONEX–Optimization, and by FAPERJ.

3.
We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems. Using the primal function of a constrained optimization problem, we apply our results both in the analysis of duality schemes constructed using augmented Lagrangian functions, and in establishing necessary and sufficient conditions for the convergence of penalty methods.

4.
We apply a modified subgradient algorithm (MSG) for solving the dual of a nonlinear and nonconvex optimization problem. The dual scheme we consider uses the sharp augmented Lagrangian. A desirable feature of this method is primal convergence, which means that every accumulation point of the primal sequence (which is automatically generated during the process) is a primal solution. This feature is not true in general for available variants of MSG. We propose here two new variants of MSG which enjoy both primal and dual convergence, as long as the dual optimal set is nonempty. These variants have a very simple choice for the stepsizes. Moreover, we also establish primal convergence when the dual optimal set is empty. Finally, our second variant of MSG converges in a finite number of steps.
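For orientation only, here is a minimal Python sketch of one common form of a modified subgradient iteration on the sharp-augmented-Lagrangian dual, L(x, u, c) = f(x) + c·||h(x)|| − u·h(x) for equality constraints h(x) = 0; the subproblem solver, the stepsize rule, and the penalty update are illustrative assumptions and do not reproduce the paper's two variants.

```python
import numpy as np

def msg_sketch(solve_subproblem, h, u0, c0, n_iter=100, tol=1e-8):
    """Modified subgradient (MSG) sketch for the sharp augmented Lagrangian
    L(x, u, c) = f(x) + c*||h(x)|| - u @ h(x).
    solve_subproblem(u, c) is assumed to return a minimizer x of L(., u, c);
    the stepsize below is a simple illustrative choice, not the paper's."""
    u, c = np.asarray(u0, float), float(c0)
    for k in range(n_iter):
        x = solve_subproblem(u, c)
        hx = np.asarray(h(x), float)
        norm_h = np.linalg.norm(hx)
        if norm_h <= tol:          # feasible subproblem solution: stop
            return x, u, c
        s = 1.0 / (k + 1)          # simple diminishing stepsize
        u = u - s * hx             # multiplier update
        c = c + 2.0 * s * norm_h   # penalty update keeps the dual improving
    return x, u, c
```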

5.
In this paper we present penalty and barrier methods for solving general convex semidefinite programming problems. More precisely, the constraint set is described by a convex operator that takes its values in the cone of negative semidefinite symmetric matrices. This class of methods is an extension of penalty and barrier methods for convex optimization to this setting. We provide implementable stopping rules and prove the convergence of the primal and dual paths obtained by these methods under minimal assumptions. The two-parameter approach for penalty methods is also extended. As in usual convex programming, we prove that after a finite number of steps all iterates will be feasible.
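A minimal sketch of one concrete penalty term of the kind such methods can use for a matrix constraint G(x) ⪯ 0 (values in the negative semidefinite cone): the sum of squared positive eigenvalues of G(x). The eigenvalue-based form and the parameter `rho` are illustrative assumptions, not the paper's specific penalty class.

```python
import numpy as np

def psd_cone_penalty(Gx, rho=1.0):
    """Quadratic penalty for the constraint G(x) <= 0 (negative semidefinite):
    rho/2 times the sum of squared positive eigenvalues of the symmetric
    matrix G(x). Illustrative choice of penalty function only."""
    eig = np.linalg.eigvalsh(np.asarray(Gx, float))
    return 0.5 * rho * np.sum(np.maximum(eig, 0.0) ** 2)
```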

6.
In this paper we develop a primal-dual subgradient algorithm for preferably decomposable, generally nondifferentiable, convex programming problems, under usual regularity conditions. The algorithm employs a Lagrangian dual function along with a suitable penalty function which satisfies a specified set of properties, in order to generate a sequence of primal and dual iterates for which some subsequence converges to a pair of primal-dual optimal solutions. Several classical types of penalty functions are shown to satisfy these specified properties. A geometric convergence rate is established for the algorithm under some additional assumptions. This approach has three principal advantages. Firstly, both primal and dual solutions are available, which proves useful in several contexts. Secondly, the choice of step sizes, which plays an important role in subgradient optimization, is guided more determinately in this method via primal and dual information. Thirdly, typical subgradient algorithms suffer from the lack of an appropriate stopping criterion, so the quality of the solution obtained after a finite number of steps is usually unknown. In contrast, by using the primal-dual gap, the proposed algorithm possesses a natural stopping criterion.
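The last two advantages can be made concrete with a small sketch: a Polyak-type stepsize guided by primal and dual information, with the primal-dual gap doubling as a stopping test. All names and tolerances are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def gap_guided_step(f_best, q_u, subgrad, tol=1e-6):
    """Polyak-type stepsize driven by the primal-dual gap f_best - q(u):
    returns None when the gap is closed to tolerance (natural stopping
    criterion), otherwise gap / ||g||^2 as the next dual stepsize."""
    gap = f_best - q_u
    if gap <= tol * max(1.0, abs(f_best)):
        return None
    return gap / max(np.dot(subgrad, subgrad), 1e-16)
```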

7.
Optimization, 2012, 61(2): 161–190
In the present article rather general penalty/barrier methods (e.g. logarithmic barriers, SUMT, exponential penalties), which define a local continuously differentiable primal and dual path, are analyzed in the case of strict local minima of nonlinear problems with inequality as well as equality constraints. In particular, the radius of convergence of Newton's method is estimated as a function of the penalty/barrier parameter. Unlike approaches based on self-concordance properties, the convergence bounds are derived by direct estimation of the solutions of the Newton equations. By means of the obtained results, parameter selection rules are studied which guarantee the local convergence of the considered penalty/barrier techniques with only a finite number of Newton steps at each parameter level. Numerical examples illustrate the practical behavior of the proposed class of methods.
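A minimal sketch, assuming a logarithmic-barrier instance of this class and callables `grad_B(x, mu)` and `hess_B(x, mu)` for the derivatives of the barrier function, of the parameter-level structure discussed above: a fixed, finite number of Newton steps at each penalty/barrier parameter level before the parameter is reduced. The reduction factor and the Newton budget are illustrative choices, not the paper's selection rules.

```python
import numpy as np

def barrier_path_following(grad_B, hess_B, x0, mu0=1.0, theta=0.2,
                           newton_per_level=5, mu_min=1e-8):
    """Outer loop over barrier parameter levels mu; at each level only a
    finite number of Newton steps on B(x, mu) is taken before mu is reduced."""
    x, mu = np.asarray(x0, float), mu0
    while mu > mu_min:
        for _ in range(newton_per_level):
            step = np.linalg.solve(hess_B(x, mu), -grad_B(x, mu))
            x = x + step            # full Newton step near the path
        mu *= theta                 # move to the next parameter level
    return x
```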

8.
Consider the utilization of a Lagrangian dual method which is convergent for consistent convex optimization problems. When it is used to solve an infeasible optimization problem, its inconsistency will then manifest itself through the divergence of the sequence of dual iterates. Does the sequence of primal subproblem solutions then still yield relevant information regarding the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrange dual. We show that the primal–dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem, in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized; the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional \(\varepsilon \)-subgradient optimization algorithm applied to the Lagrangian dual problem, and the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal–dual sequence to the set of saddle-points of the associated homogeneous Lagrangian function; for linear programs, convergence to the subset in which the primal objective is at minimum is also achieved.
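The ergodic sequence mentioned above is, in its simplest form, a weighted running average of the primal subproblem solutions produced along the dual iterations. The sketch below uses stepsize weights as an illustrative choice; it is not the paper's exact construction.

```python
import numpy as np

def ergodic_average(primal_iterates, stepsizes):
    """Stepsize-weighted ergodic average of primal subproblem solutions
    x_1, ..., x_k collected while optimizing the Lagrangian dual:
    xbar_k = (sum_j s_j * x_j) / (sum_j s_j)."""
    X = np.asarray(primal_iterates, float)   # shape (k, n)
    s = np.asarray(stepsizes, float)         # shape (k,)
    return (s[:, None] * X).sum(axis=0) / s.sum()
```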

9.
We present a primal-dual row-action method for the minimization of a convex function subject to general convex constraints. Constraints are used one at a time, no changes are made in the constraint functions and their Jacobian matrix (thus, the row-action nature of the algorithm), and at each iteration a subproblem is solved consisting of minimization of the objective function subject to one or two linear equations. The algorithm generates two sequences: one of them, called primal, converges to the solution of the problem; the other one, called dual, approximates a vector of optimal KKT multipliers for the problem. We prove convergence of the primal sequence for general convex constraints. In the case of linear constraints, we prove that the primal sequence converges at least linearly and obtain as a consequence the convergence of the dual sequence. The research of the first author was partially supported by CNPq Grant No. 301280/86.

10.
Linear Programming, LP, problems with finite optimal value have a zero duality gap and a primal–dual strictly complementary optimal solution pair. On the other hand, there exist Semidefinite Programming, SDP, problems which have a nonzero duality gap (different primal and dual optimal values; not both infinite). The duality gap is assured to be zero if a constraint qualification, e.g., Slater's condition (strict feasibility), holds. Measures of strict feasibility, also called distance to infeasibility, have been used in complexity analysis, and it is known that (near) loss of strict feasibility results in numerical difficulties. In addition, there exist SDP problems which have a zero duality gap but no strictly complementary primal–dual optimal solution. We refer to these problems as hard instances of SDP. The assumption of strict complementarity is essential for asymptotic superlinear and quadratic rate convergence proofs. In this paper, we introduce a procedure for generating hard instances of SDP with a specified complementarity nullity (the dimension of the common nullspace of primal–dual optimal pairs). We then show, empirically, that the complementarity nullity correlates well with the observed local convergence rate and the number of iterations required to obtain optimal solutions to a specified accuracy, i.e., we show, even when Slater's condition holds, that the loss of strict complementarity results in numerical difficulties. We include two new measures of hardness that correlate well with the complementarity nullity.
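As a small illustration of the central quantity, the following sketch computes the complementarity nullity of a primal-dual optimal SDP pair (X, Z) from their ranks, using the fact that X Z = 0 at optimality, so the ranges of X and Z are orthogonal; the rank tolerance is an illustrative choice.

```python
import numpy as np

def complementarity_nullity(X, Z, tol=1e-8):
    """Dimension of the common nullspace of a primal-dual optimal SDP pair
    (X, Z), both symmetric PSD with X @ Z = 0: equals n - rank(X) - rank(Z).
    Zero means strict complementarity; a positive value flags a hard instance."""
    n = X.shape[0]
    rank = lambda M: int(np.sum(np.linalg.eigvalsh(M) > tol))
    return n - rank(X) - rank(Z)
```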

11.
In this paper, we consider convergence properties of a class of penalization methods for a general vector optimization problem with cone constraints in infinite dimensional spaces. Under certain assumptions, we show that any efficient point of the cone constrained vector optimization problem can be approached by a sequence of efficient points of the penalty problems. We also show, on the other hand, that any limit point of a sequence of approximate efficient solutions to the penalty problems is a weakly efficient solution of the original cone constrained vector optimization problem. Finally, when the constrained space is of finite dimension, we show that any limit point of a sequence of stationary points of the penalty problems is a KKT stationary point of the original cone constrained vector optimization problem if the Mangasarian–Fromovitz constraint qualification holds at the limit point. This work is supported by the Postdoctoral Fellowship of Hong Kong Polytechnic University.

12.
We consider a primal optimization problem in a reflexive Banach space and a duality scheme via generalized augmented Lagrangians. For solving the dual problem (in a Hilbert space), we introduce and analyze a new parameterized Inexact Modified Subgradient (IMSg) algorithm. The IMSg generates a primal-dual sequence, and we focus on two simple new choices of the stepsize. We prove that every weak accumulation point of the primal sequence is a primal solution and the dual sequence converges weakly to a dual solution, as long as the dual optimal set is nonempty. Moreover, we establish primal convergence even when the dual optimal set is empty. Our second choice of the stepsize gives rise to a variant of IMSg which has finite termination.

13.
This paper is devoted to the study of optimal solutions of symmetric cone programs by means of the asymptotic behavior of central paths with respect to a broad class of barrier functions. This class is, for instance, larger than that typically found in the literature for semidefinite programming. In this general framework, we prove the existence and the convergence of primal, dual and primal–dual central paths. We are then able to establish concrete characterizations of the limit points of these central paths for specific subclasses. Indeed, for the class of barrier functions defined at the origin, we prove that the limit point of a primal central path minimizes the corresponding barrier function over the solution set of the studied symmetric cone program. In addition, we show that the limit points of the primal and dual central paths lie in the relative interior of the primal and dual solution sets for the case of the logarithm and modified logarithm barriers.

14.
In this paper, we first establish a general recession condition under which a semi-infinite convex program and its formal Lagrangian dual have the same value. We go on to show that, under this condition, the following hold. First, every finite subprogram, with 'enough' of the given constraints, has the same value as its Lagrangian dual. Second, the weak value of the primal program is equal to the optimal value of the primal. The first draft of this work, entitled 'Asymptotic Convex Programming', was completed while the author was a member of the Department of Mathematical Sciences at the University of Delaware, Newark, DE 19711.

15.
In this paper, we present a simpler proof of the result of Tsuchiya and Muramatsu on the convergence of the primal affine scaling method. We show that the primal sequence generated by the method converges to the interior of the optimum face and the dual sequence to the analytic center of the optimal dual face, when the step size implemented in the procedure is bounded by 2/3. We also prove the optimality of the limit of the primal sequence for a slightly larger step size of 2q/(3q-1), where q is the number of zero variables in the limit. We show this by proving the dual feasibility of a cluster point of the dual sequence. Partially supported by the grant CCR-9321550 from NSF.
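For reference, a minimal sketch of one primal affine-scaling iteration for a standard-form LP (min c·x subject to Ax = b, x ≥ 0), with the step fraction bounded by 2/3 as in the convergence result discussed above; degeneracy, infeasibility, and termination handling are omitted.

```python
import numpy as np

def affine_scaling_step(A, c, x, step=2.0 / 3.0):
    """One primal affine-scaling step at a strictly positive feasible x.
    The step fraction 2/3 is the bound under which Tsuchiya and Muramatsu
    proved convergence. Minimal sketch without degeneracy handling."""
    D2 = np.diag(x ** 2)                            # scaling matrix D^2
    w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
    s = c - A.T @ w                                 # reduced cost estimate
    dx = -D2 @ s                                    # affine-scaling direction
    gamma = (-dx / x).max()                         # largest relative decrease
    if gamma <= 0:
        raise ValueError("no decreasing component: x may be optimal "
                         "or the problem unbounded")
    return x + (step / gamma) * dx
```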

16.
The convergence of primal and dual central paths associated to entropy and exponential functions, respectively, for semidefinite programming problems is studied in this paper. It is proved that the primal path converges to the analytic center of the primal optimal set with respect to the entropy function, the dual path converges to a point in the dual optimal set, and the primal-dual path associated with these paths converges to a point in the primal-dual optimal set. As an application, the generalized proximal point method with the Kullback-Leibler distance applied to semidefinite programming problems is considered. The convergence of the primal proximal sequence to the analytic center of the primal optimal set with respect to the entropy function is established, and the convergence of a particular weighted dual proximal sequence to a point in the dual optimal set is obtained.

17.
This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. The conditions to be satisfied at a solution are the usual Jacobian uniqueness conditions. Convergence rates are proved for three kinds of step size rules: (i) the step size rule adopted by Zhang et al. in their convergence analysis of a primal-dual interior point method for linear programs, which uses a single step size for the primal and dual variables; (ii) the step size rule used in the software package OB1, which uses different step sizes for the primal and dual variables; and (iii) the step size rule used by Yamashita for his globally convergent primal-dual interior point method for general constrained optimization problems, which also uses different step sizes for the primal and dual variables. Conditions on the barrier parameter and on the parameters in the step size rules are given for each case. For these step size rules, local and quadratic convergence of the Newton method and local and superlinear convergence of the quasi-Newton method are proved. A preliminary version of this paper was presented at the conference “Optimization-Models and Algorithms” held at the Institute of Statistical Mathematics, Tokyo, March 1993.
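A minimal sketch of the kind of step size rules compared above, built on the usual fraction-to-the-boundary rule: either one common step size for the primal and dual variables or separate primal and dual step sizes. The rule and the parameter `tau` are illustrative; they are not the exact rules of Zhang et al., OB1, or Yamashita.

```python
import numpy as np

def fraction_to_boundary(z, dz, tau=0.995):
    """Largest alpha in (0, 1] with z + alpha*dz >= (1 - tau)*z,
    the standard fraction-to-the-boundary rule for nonnegative z."""
    neg = dz < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, tau * np.min(-z[neg] / dz[neg]))

def step_sizes(x, dx, s, ds, common=True):
    """Either a single step size for primal and dual variables (rule (i))
    or separate primal and dual step sizes (rules (ii)-(iii))."""
    a_p = fraction_to_boundary(x, dx)
    a_d = fraction_to_boundary(s, ds)
    return (min(a_p, a_d),) * 2 if common else (a_p, a_d)
```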

18.
We study dual functionals which have two fundamental properties. Firstly, they have good asymptotic behavior. Secondly, to each dual sequence of subgradients converging to zero, one can associate a primal sequence which converges to an optimal solution of the primal problem. Furthermore, minimal conditions for the convergence of the Gauss-Seidel methods are given and applied to such functionals.

19.
Levitin–Polyak well-posedness of constrained vector optimization problems
In this paper, we consider Levitin–Polyak type well-posedness for a general constrained vector optimization problem. We introduce several types of (generalized) Levitin–Polyak well-posedness. Criteria and characterizations for these types of well-posedness are given. Relations among these types of well-posedness are investigated. Finally, we consider convergence of a class of penalty methods under the assumption of a type of generalized Levitin–Polyak well-posedness.

20.
We refine the speed of convergence analysis for the quadratic augmented penalty algorithm. We improve the convergence order from 4/3 to 3/2 for the first-order multiplier iteration. For the second-order iteration, we generalize the analysis and consider a primal–dual variant which asymptotically reduces to a Newton step for the optimality conditions.
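For context, a minimal sketch of the classical first-order multiplier iteration for the quadratic augmented Lagrangian (the iteration whose convergence order the paper improves from 4/3 to 3/2); the inner minimizer, the penalty update, and the tolerances are illustrative assumptions, not the paper's refined scheme.

```python
import numpy as np

def multiplier_method(minimize_auglag, h, lam0, c0=10.0, c_growth=2.0,
                      n_iter=50, tol=1e-8):
    """First-order multiplier iteration for the quadratic augmented
    Lagrangian L_c(x, lam) = f(x) + lam @ h(x) + (c/2)*||h(x)||^2:
    lam_{k+1} = lam_k + c_k * h(x_k). minimize_auglag(lam, c) is assumed
    to return an (approximate) minimizer x of L_c(., lam)."""
    lam, c = np.asarray(lam0, float), c0
    for _ in range(n_iter):
        x = minimize_auglag(lam, c)
        hx = np.asarray(h(x), float)
        if np.linalg.norm(hx) <= tol:
            return x, lam
        lam = lam + c * hx        # first-order multiplier update
        c *= c_growth             # optionally increase the penalty
    return x, lam
```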

