Similar Literature (20 results)
1.
This paper proposes an implementation of a constrained analytic center cutting plane method to solve nonlinear multicommodity flow problems. The new approach exploits the property that the objective of the Lagrangian dual problem has a smooth component with second order derivatives readily available in closed form. The cutting planes issued from the nonsmooth component and the epigraph set of the smooth component form a localization set that is endowed with a self-concordant augmented barrier. Our implementation uses an approximate analytic center associated with that barrier to query the oracle of the nonsmooth component. The paper also proposes an approximation scheme for the original objective. An active set strategy can be applied to the transformed problem: it reduces the dimension of the dual space and accelerates computations. The new approach solves huge instances with high accuracy. The method is compared to alternative approaches proposed in the literature. An erratum to this article is available.
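For orientation, a standard definition (not specific to this paper): the analytic center of a polyhedral localization set $\{x : a_i^{T}x \le b_i,\ i=1,\dots,m\}$ is the minimizer of the logarithmic barrier
$$\min_x\ -\sum_{i=1}^{m} \log\bigl(b_i - a_i^{T}x\bigr).$$
The method described above works with an augmented self-concordant barrier that additionally accounts for the epigraph of the smooth component, and queries the nonsmooth oracle at an approximate minimizer of that barrier.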

2.
We consider the inclusion of commitment of thermal generation units in the optimal management of the Brazilian power system. By means of Lagrangian relaxation we decompose the problem and obtain a nondifferentiable dual function that is separable. We solve the dual problem with a bundle method. Our purpose is twofold: first, bundle methods are the methods of choice in nonsmooth optimization when it comes to solving large-scale problems with high precision. Second, they give good starting points for recovering primal solutions. We use an inexact augmented Lagrangian technique to find a near-optimal primal feasible solution. We assess our approach with numerical results.
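Schematically, the structure exploited here is that of Lagrangian relaxation of coupling constraints (a simplified sketch in our notation, not the paper's exact hydrothermal model): for $\min \sum_j f_j(x_j)$ subject to $x_j \in X_j$ for each unit $j$ and a coupling constraint $\sum_j A_j x_j = b$, relaxing the coupling yields the dual function
$$\theta(\lambda) = -\lambda^{T} b + \sum_j \min_{x_j \in X_j}\bigl\{ f_j(x_j) + \lambda^{T} A_j x_j \bigr\},$$
which is concave, nondifferentiable in general, and separable across the subproblems indexed by $j$, exactly the structure for which bundle methods are well suited.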

3.
We consider an inverse quadratic programming (QP) problem in which the parameters in the objective function of a given QP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semidefinite cone constraint, and its dual is a linearly constrained semismoothly differentiable (SC1) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of primal iterates, generated by the augmented Lagrangian method, is proportional to $1/r$, and the rate of multiplier iterates is proportional to $1/\sqrt{r}$, where r is the penalty parameter in the augmented Lagrangian. As the objective function of the dual problem is an SC1 function involving the projection operator onto the cone of symmetric positive semidefinite matrices, the analysis requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and properties of the projection operator in the symmetric-matrix space. Furthermore, the semismooth Newton method with Armijo line search is applied to solve the subproblems in the augmented Lagrangian approach, which is proven to have global convergence and a local quadratic rate. Finally, numerical results obtained with the augmented Lagrangian method are reported.

4.
Yi Zhang, Liwei Zhang, Yue Wu. TOP, 2014, 22(1): 45-79
The focus of this paper is on studying an inverse second-order cone quadratic programming problem, in which the parameters in the objective function need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with cone constraints, and its dual, which has fewer variables than the original one, is a semismoothly differentiable (SC1) convex programming problem with both a linear inequality constraint and a linear second-order cone constraint. We demonstrate the global convergence of the augmented Lagrangian method with an exact solution to the subproblem and prove that the convergence rate of primal iterates, generated by the augmented Lagrangian method, is proportional to $1/r$, and the rate of multiplier iterates is proportional to $1/\sqrt{r}$, where r is the penalty parameter in the augmented Lagrangian. Furthermore, a semismooth Newton method with Armijo line search is constructed to solve the subproblems in the augmented Lagrangian approach. Finally, numerical results are reported to show the effectiveness of the augmented Lagrangian method with both an exact solution and an inexact solution to the subproblem for solving the inverse second-order cone quadratic programming problem.

5.
A primal-dual version of the proximal point algorithm is developed for linearly constrained convex programming problems. The algorithm is an iterative method to find a saddle point of the Lagrangian of the problem. At each iteration of the algorithm, we compute an approximate saddle point of the Lagrangian function augmented by quadratic proximal terms in both the primal and dual variables. Specifically, we first minimize the function with respect to the primal variables and then approximately maximize the resulting function of the dual variables. The merit of this approach lies in the fact that the latter function is differentiable and the maximization of this function is subject to no constraints. We discuss convergence properties of the algorithm and report some numerical results for network flow problems with separable quadratic costs.
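A minimal sketch of one iteration of such a scheme (our notation, assuming linear constraints $Ax = b$ and a single proximal parameter $c > 0$; the paper's parameterization may differ): given the current pair $(x^k, y^k)$, form
$$L_k(x,y) = f(x) + y^{T}(Ax - b) + \tfrac{1}{2c}\|x - x^k\|^2 - \tfrac{1}{2c}\|y - y^k\|^2,$$
set $\varphi_k(y) = \min_x L_k(x,y)$, take $y^{k+1}$ as an approximate unconstrained maximizer of the differentiable concave function $\varphi_k$, and let $x^{k+1}$ be the corresponding inner minimizer.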

6.
“Classical” First Order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov’s optimal algorithm for smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the problem domain should admit a “good proximal setup”. The latter essentially means that (1) the problem domain should satisfy certain geometric conditions of “favorable geometry”, and (2) the practical use of these methods depends on our ability to compute the proximal transformation at a moderate cost at each iteration. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with a nuclear norm constraint, where the cost of computing the proximal transformation becomes prohibitive at large scale. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the accuracy certificates supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a “good proximal setup” for the primal problem but requires the problem domain to admit a Linear Optimization oracle: the ability to efficiently maximize a linear form on the domain of the primal problem.
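As a point of reference (written in common notation; the paper's exact setting may differ), a Fenchel-type representation of a convex objective $f$ is an expression of the form
$$f(x) = \max_{y \in Y}\bigl\{ \langle y, Ax + a\rangle - \psi(y)\bigr\}$$
with $Y$ convex and $\psi$ convex. The FO method is then run on the induced dual problem in $y$, while a Linear Optimization oracle over the primal domain and the accuracy certificates produced by the method are used to assemble an approximate primal solution $x$.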

7.
In this paper, we consider a nonsmooth multiobjective optimization problem in which the objective and constraint functions involved are directionally differentiable. A new class of generalized (d-ρ-η-θ)-type I univex functions is introduced, which generalizes many earlier classes cited in the literature. Based upon these generalized functions, we derive weak, strong, converse and strict converse duality theorems for a mixed-type multiobjective dual program in order to relate the efficient and weakly efficient solutions of the primal and dual problems.

8.
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting plane method with some features of bundle methods, and the search-direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of points in the interior of the epigraph of the objective function. The accumulation points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.

9.
Smooth methods of multipliers for complementarity problems
This paper describes several methods for solving nonlinear complementarity problems. A general duality framework for pairs of monotone operators is developed and then applied to the monotone complementarity problem, obtaining primal, dual, and primal-dual formulations. We derive Bregman-function-based generalized proximal algorithms for each of these formulations, generating three classes of complementarity algorithms. The primal class is well-known. The dual class is new and constitutes a general collection of methods of multipliers, or augmented Lagrangian methods, for complementarity problems. In a special case, it corresponds to a class of variational inequality algorithms proposed by Gabay. By appropriate choice of Bregman function, the augmented Lagrangian subproblem in these methods can be made continuously differentiable. The primal-dual class of methods is entirely new and combines the best theoretical features of the primal and dual methods. Some preliminary computation shows that this class of algorithms is effective at solving many of the standard complementarity test problems. Received February 21, 1997 / Revised version received December 11, 1998 / Published online May 12, 1999
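For reference, the standard statement of the problem class (not specific to this paper): the nonlinear complementarity problem for a mapping $F:\mathbb{R}^n \to \mathbb{R}^n$ asks for a point satisfying
$$x \ge 0, \qquad F(x) \ge 0, \qquad \langle x, F(x)\rangle = 0,$$
and monotonicity of $F$ is what allows the primal, dual, and primal-dual formulations above to be cast in the framework of pairs of monotone operators.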

10.
Duality formulations can be derived from a nonlinear primal optimization problem in several ways. One abstract theoretical concept presented by Johri is the framework of general dual problems. They provide the tightest of the specific bounds on the primal optimum generated by dual subproblems which relax the primal problem with respect to the objective function, to the feasible set, or even to both. The well-known Lagrangian dual and surrogate dual are shown to be special cases. Dominating functions and including sets, the two relaxation devices of Johri's general dual, turn out to be the most general formulations of augmented Lagrangian functions and augmented surrogate regions.

11.
This paper is concerned with a primal–dual interior point method for solving nonlinear semidefinite programming problems. The method consists of the outer iteration (SDPIP) that finds a KKT point and the inner iteration (SDPLS) that calculates an approximate barrier KKT point. Algorithm SDPLS uses a commutative class of Newton-like directions for the generation of line search directions. By combining the primal barrier penalty function and the primal–dual barrier function, a new primal–dual merit function is proposed. We prove the global convergence property of our method. Finally, some numerical experiments are given.

12.
Optimization, 2012, 61(4): 717-738
Augmented Lagrangian duality provides zero duality gap and saddle point properties for nonconvex optimization. On the basis of this duality, subgradient-like methods can be applied to the (convex) dual of the original problem. These methods usually recover the optimal value of the problem, but may fail to provide a primal solution. We prove that the recovery of a primal solution by such methods can be characterized in terms of (i) the differentiability properties of the dual function and (ii) the exact penalty properties of the primal-dual pair. We also connect the property of finite termination with exact penalty properties of the dual pair. In order to establish these facts, we associate the primal-dual pair to a penalty map. This map, which we introduce here, is a convex and globally Lipschitz function and its epigraph encapsulates information on both primal and dual solution sets.

13.
In this paper we propose a primal-dual homotopy method for \(\ell _1\)-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a respective piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point and then, solving another small linear program, a new optimal dual solution is calculated which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies.
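The abstract does not spell out the model; a representative member of this problem class (our assumption, used only to illustrate the role of the homotopy parameter) is
$$\min_x \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_\infty \le \delta,$$
with the bound $\delta$ serving as the natural homotopy parameter: the primal solution then follows a piecewise linear path in $\delta$ and the dual solution a piecewise constant one, as described above.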

14.
This paper presents a stochastic algorithm with proper stopping rules for nonsmooth inequality-constrained minimization problems. The algorithm is based on an augmented Lagrangian dual problem transformed from a primal one, and it consists of two loops: an outer loop, which is the iteration for the approximate Lagrange multipliers, and an inner loop, which is a nonsmooth unconstrained minimization subroutine. Under mild assumptions, the algorithm is proved to be almost surely convergent. This work was partially supported by the Science Foundation of Ningbo University. The author is grateful to Professor D. Q. Mayne for his help with this work and to two referees for their helpful comments.

15.
We study the problem of minimizing a sum of Euclidean norms. This nonsmooth optimization problem arises in many different kinds of modern scientific applications. In this paper we first transform this problem and its dual problem into a system of strongly semismooth equations, and give some uniqueness theorems for this problem. We then present a primal–dual algorithm for this problem by solving this system of strongly semismooth equations. Preliminary numerical results are reported, which show that this primal–dual algorithm is very promising.
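In a common formulation of this problem class (notation ours), one solves
$$\min_x\ \sum_{i=1}^{m} \|b_i - A_i^{T}x\|_2,$$
a convex problem that is nondifferentiable at any $x$ for which some residual $b_i - A_i^{T}x$ vanishes, which is what motivates the semismooth reformulation of the optimality conditions.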

16.
We apply a modified subgradient algorithm (MSG) for solving the dual of a nonlinear and nonconvex optimization problem. The dual scheme we consider uses the sharp augmented Lagrangian. A desirable feature of this method is primal convergence, which means that every accumulation point of a primal sequence (which is automatically generated during the process), is a primal solution. This feature is not true in general for available variants of MSG. We propose here two new variants of MSG which enjoy both primal and dual convergence, as long as the dual optimal set is nonempty. These variants have a very simple choice for the stepsizes. Moreover, we also establish primal convergence when the dual optimal set is empty. Finally, our second variant of MSG converges in a finite number of steps.
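For orientation (in the form commonly used for such modified subgradient schemes; notation and scaling may differ from the paper's): for an equality-constrained problem $\min\{f(x) : h(x)=0,\ x \in X\}$, the sharp augmented Lagrangian is
$$L(x,u,c) = f(x) + c\,\|h(x)\| - \langle u, h(x)\rangle,$$
and MSG-type methods maximize the associated dual function over the multiplier pair $(u,c)$ with subgradient-like steps.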

17.
We propose a new class of incremental primal–dual techniques for solving nonlinear programming problems with special structure. Specifically, the objective functions of the problems are sums of independent nonconvex continuously differentiable terms minimized subject to a set of nonlinear constraints for each term. The technique performs successive primal–dual increments for each decomposition term of the objective function. The primal–dual increments are calculated by performing one Newton step towards the solution of the Karush–Kuhn–Tucker optimality conditions of each subproblem associated with each objective function term. We show that the resulting incremental algorithm is q-linearly convergent under mild assumptions for the original problem.

18.
We consider a primal optimization problem in a reflexive Banach space and a duality scheme via generalized augmented Lagrangians. For solving the dual problem (in a Hilbert space), we introduce and analyze a new parameterized Inexact Modified Subgradient (IMSg) algorithm. The IMSg generates a primal-dual sequence, and we focus on two simple new choices of the stepsize. We prove that every weak accumulation point of the primal sequence is a primal solution and the dual sequence converges weakly to a dual solution, as long as the dual optimal set is nonempty. Moreover, we establish primal convergence even when the dual optimal set is empty. Our second choice of the stepsize gives rise to a variant of IMSg which has finite termination.

19.
Several recent algorithms for solving nonlinear programming problems with equality constraints have made use of an augmented penalty Lagrangian function, where terms involving squares of the constraint functions are added to the ordinary Lagrangian. In this paper, the corresponding penalty Lagrangian for problems with inequality constraints is described, and its relationship with the theory of duality is examined. In the convex case, the modified dual problem consists of maximizing a differentiable concave function (indirectly defined) subject to no constraints at all. It is shown that any maximizing sequence for the dual can be made to yield, in a general way, an asymptotically minimizing sequence for the primal which typically converges at least as rapidly. Supported in part by the Air Force Office of Scientific Research under grant AF-AFOSR-72-2269.
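The standard form of such an augmented penalty Lagrangian for inequality constraints $g_i(x) \le 0$, $i=1,\dots,m$, with penalty parameter $r > 0$ is (in common notation; scaling conventions may differ from the paper's)
$$L_r(x,y) = f(x) + \frac{1}{2r}\sum_{i=1}^{m}\Bigl(\max\{0,\ y_i + r\,g_i(x)\}^2 - y_i^2\Bigr),$$
which, on the set where $y_i + r\,g_i(x) \ge 0$, equals the ordinary Lagrangian term $y_i g_i(x)$ plus the quadratic penalty $\tfrac{r}{2}g_i(x)^2$, and in the convex case gives rise to the differentiable unconstrained dual maximization described above.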

20.
Converse duality for multiobjective fractional programming
We consider a class of differentiable multiobjective fractional programming problems. First, two dual models for the primal problem are formulated. Then, building on weak duality theorems from the related literature and using Fritz John type necessary conditions, the corresponding converse duality theorems are proved.
