Similar documents
20 similar documents found (search time: 609 ms)
1.
Lagrangian bounds, i.e. bounds computed by Lagrangian relaxation, have been used successfully in branch-and-bound methods for solving certain classes of nonconvex optimization problems, since they reduce the duality gap. We discuss this approach for the class of partly linear and partly convex optimization problems and, incidentally, point out incorrect results in the recent literature on this subject.
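For context, a minimal sketch of how such a bound arises (generic Lagrangian relaxation, not the specific construction of the paper): for a problem \min\{f(x) : g(x) \le 0,\ x \in X\}, weak duality gives

    d(\lambda) = \inf_{x \in X}\{ f(x) + \lambda^{\top} g(x) \} \;\le\; f^{*} \quad \text{for every } \lambda \ge 0,

so d^{*} = \sup_{\lambda \ge 0} d(\lambda) is a valid lower bound inside a branch-and-bound scheme, and the duality gap is f^{*} - d^{*} \ge 0.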

2.
The aim of this paper is to present a nonconvex duality with a zero gap and its connection with convex duality. Since a convex program can be regarded as a particular case of convex maximization over a convex set, a nonconvex duality can be regarded as a generalization of convex duality. The generalized duality can be obtained on the basis of convex duality and minimax theorems. The duality with a zero gap can be extended to more general nonconvex problems, such as quasiconvex maximization over a general nonconvex set or general minimization over the complement of a convex set. Several applications are given.

3.
The purpose of this article is to develop a branch-and-bound algorithm, using duality bounds, for the general quadratically constrained quadratic programming problem, with the following properties: (i) the duality bounds are computed by solving ordinary linear programs; (ii) they are at least as good as the lower bounds obtained by solving relaxed problems in which each nonconvex function is replaced by its convex envelope; (iii) standard convergence properties of branch-and-bound algorithms for nonconvex global optimization problems are guaranteed. Numerical results of preliminary computational experiments for the case of one quadratic constraint are reported.

4.
Motivated by weakly convex optimization and quadratic optimization problems, we first show that there is no duality gap between a difference of convex (DC) program over DC constraints and its associated dual problem. We then provide certificates of global optimality for a class of nonconvex optimization problems. As an application, we derive characterizations of robust solutions for uncertain general nonconvex quadratic optimization problems over nonconvex quadratic constraints.
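As background (a classical identity, not a result specific to this paper), the Toland–Singer duality for DC programs states that, for proper lower semicontinuous convex g and h,

    \inf_{x}\{ g(x) - h(x) \} = \inf_{y}\{ h^{*}(y) - g^{*}(y) \},

where g^{*} and h^{*} denote the convex conjugates; it is one standard way in which a DC primal and its dual exhibit a zero duality gap.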

5.
In this paper, we consider multiobjective semi-infinite optimization problems which are defined in a finite-dimensional space by finitely many objective functions and infinitely many inequality constraints. We present duality results both for the convex and nonconvex case. In particular, we show weak, strong and converse duality with respect to both efficiency and weak efficiency. Moreover, the property of being a locally properly efficient point plays a crucial role in the nonconvex case.
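For concreteness, the generic problem class has the form (in standard notation; the paper's exact formulation may differ in detail)

    \min_{x \in \mathbb{R}^{n}} \big( f_{1}(x), \dots, f_{m}(x) \big) \quad \text{s.t.} \quad g(x,t) \le 0 \ \text{ for all } t \in T,

with finitely many objectives f_{i} and an infinite index set T, which is what makes the constraint system semi-infinite.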

6.
Zhao Chen, Luo Ziyan, Li Weiyue, Qi Houduo, Xiu Naihua. Science China Mathematics (《中国科学 数学(英文版)》), 2019, 62(10): 2015-2032
The sparse linear programming (SLP) problem is a linear programming problem equipped with a sparsity constraint, which is nonconvex, discontinuous and generally NP-hard due to the combinatorial structure involved. In this paper, by rewriting the sparsity constraint in a disjunctive form, we present an explicit formula for the Lagrangian dual of the SLP, in terms of an unconstrained piecewise-linear convex programming problem which admits strong duality under bi-dual sparsity consistency. Furthermore, we show a saddle point theorem based on the strong duality and analyze two classes of stationary points for the saddle point problem. Finally, we extend these results to the SLP in which the lower bound zero is replaced by a certain negative constant.
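A sketch of the problem class (assuming the usual meaning of a sparsity constraint; the paper's exact formulation may differ):

    \min_{x \in \mathbb{R}^{n}} \; c^{\top}x \quad \text{s.t.} \quad Ax \le b \ (\text{or } Ax = b), \quad \|x\|_{0} \le s,

where \|x\|_{0} counts the nonzero entries of x. Writing \{x : \|x\|_{0} \le s\} as a union (disjunction) of the \binom{n}{s} coordinate subspaces obtained by fixing which entries may be nonzero is what makes an explicit Lagrangian dual tractable to write down.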

7.
Lagrangian relaxation is often an efficient tool for solving (large-scale) optimization problems, even nonconvex ones. However, it introduces a duality gap, which should be small for the method to be really efficient. Here we make a geometric study of the duality gap. Given a nonconvex problem, we first formulate a convex problem having the same dual. This formulation involves a convexification in the product of the three spaces containing, respectively, the variables, the objective and the constraints. We apply our results to several relaxation schemes, especially one called “Lagrangean decomposition” in the combinatorial-optimization community, or “operator splitting” elsewhere. We also study a specific, highly nonlinear application: the unit-commitment problem.
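As an illustration of the “Lagrangean decomposition / operator splitting” scheme mentioned above (a generic sketch, not the paper's notation): to minimize f(x) + g(x), duplicate the variable and dualize the coupling constraint,

    \min_{x,y}\; f(x) + g(y) \quad \text{s.t.} \quad x = y,
    \qquad
    L(x,y,\lambda) = f(x) + \lambda^{\top}x + g(y) - \lambda^{\top}y,

so the dual function \theta(\lambda) = \inf_{x}\{f(x) + \lambda^{\top}x\} + \inf_{y}\{g(y) - \lambda^{\top}y\} separates into two independent subproblems.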

8.

We study convex relaxations of nonconvex quadratic programs. We identify a family of so-called feasibility preserving convex relaxations, which includes the well-known copositive and doubly nonnegative relaxations, with the property that the convex relaxation is feasible if and only if the nonconvex quadratic program is feasible. We observe that each convex relaxation in this family implicitly induces a convex underestimator of the objective function on the feasible region of the quadratic program. This alternative perspective on convex relaxations enables us to establish several useful properties of the corresponding convex underestimators. In particular, if the recession cone of the feasible region of the quadratic program does not contain any directions of negative curvature, we show that the convex underestimator arising from the copositive relaxation is precisely the convex envelope of the objective function of the quadratic program, strengthening Burer’s well-known result on the exactness of the copositive relaxation in the case of nonconvex quadratic programs. We also present an algorithmic recipe for constructing instances of quadratic programs with a finite optimal value but an unbounded relaxation for a rather large family of convex relaxations including the doubly nonnegative relaxation.

9.
Since the standard multi-knapsack problem may be rewritten as a reverse convex problem, we present a global optimization approach. It is known from solving high-dimensional nonconvex problems that pure cutting-plane methods may fail and branch-and-bound is impractical, due to a large duality gap. On the other hand, a strategy based on some sufficient optimality condition does not help much because it requires generating all level-set points, an intractable problem. Therefore, we propose to combine a cutting-plane method and a sufficient optimality condition with a random generation of level-set points, where the number of points is limited by a tabu list to prevent re-examination of the same level-set area. Experiments show that we end up with a small duality gap, allowing a subsequent branch-and-bound approach to prove optimality.
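One standard way (given here only as an illustration, not necessarily the exact reformulation used in the paper) to see the binary multi-knapsack problem as a reverse convex program: relax x \in \{0,1\}^{n} to 0 \le x \le 1 and add

    \sum_{i=1}^{n} x_{i}(1 - x_{i}) \le 0 \quad \Longleftrightarrow \quad e^{\top}x - x^{\top}x \le 0,

which is a reverse convex constraint (the left-hand side is concave) and is satisfied by a point of [0,1]^{n} exactly when every x_{i} is 0 or 1.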

10.
This paper presents a method to estimate bounds on the radius of the feasible space for a class of constrained nonconvex quadratic programming problems. Results show that one may compute a bound on the radius of the feasible space by solving a linear program, which can be done in polynomial time [N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373–395]. It is proposed that this method be applied together with the canonical dual transformation [D.Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, J. Global Optimization 29 (2004) 377–399] for solving standard quadratic programming problems.
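As an illustration of the idea of bounding a feasible region via linear programming, here is a minimal, generic sketch: a box bound on a polyhedral region {x : Ax <= b} obtained from one LP per coordinate direction. The box-bound construction, the toy data and the SciPy usage are assumptions for illustration, not the paper's actual procedure.

    import numpy as np
    from scipy.optimize import linprog

    def box_radius_bound(A, b):
        """Bound the region {x : A x <= b} in the infinity norm by
        minimizing and maximizing each coordinate with an LP.  Returns the
        half-width of the smallest axis-aligned bounding box (an upper
        bound on the region's radius), or np.inf if an LP is unbounded
        or infeasible."""
        n = A.shape[1]
        lo, hi = np.empty(n), np.empty(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0
            res_min = linprog(e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)   # min x_i
            res_max = linprog(-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)  # max x_i
            if not (res_min.success and res_max.success):
                return np.inf
            lo[i], hi[i] = res_min.fun, -res_max.fun
        return 0.5 * np.max(hi - lo)

    # toy data (an assumption, just to make the sketch runnable):
    # the triangle x1 + x2 <= 2, x1 >= 0, x2 >= 0
    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([2.0, 0.0, 0.0])
    print(box_radius_bound(A, b))  # prints 1.0, the half-width of the bounding box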

11.
Optimization, 2012, 61(4): 717-738
Augmented Lagrangian duality provides zero duality gap and saddle point properties for nonconvex optimization. On the basis of this duality, subgradient-like methods can be applied to the (convex) dual of the original problem. These methods usually recover the optimal value of the problem, but may fail to provide a primal solution. We prove that the recovery of a primal solution by such methods can be characterized in terms of (i) the differentiability properties of the dual function and (ii) the exact penalty properties of the primal-dual pair. We also connect the property of finite termination with exact penalty properties of the dual pair. In order to establish these facts, we associate the primal-dual pair with a penalty map. This map, which we introduce here, is a convex and globally Lipschitz function, and its epigraph encapsulates information on both the primal and dual solution sets.
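For reference, the standard augmented Lagrangian objects this kind of duality is built from (stated for equality constraints h(x) = 0; the paper's setting is more general):

    L_{\rho}(x,\lambda) = f(x) + \lambda^{\top} h(x) + \tfrac{\rho}{2}\,\|h(x)\|^{2},
    \qquad
    q_{\rho}(\lambda) = \inf_{x} L_{\rho}(x,\lambda).

The dual function q_{\rho} is concave (an infimum of functions affine in \lambda), which is why subgradient-type methods can be applied to the dual; under appropriate assumptions the dual optimal value equals the primal one even for nonconvex f, which is the zero-duality-gap property referred to above.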

12.
A global optimization method, QBB, for twice-differentiable NLPs (nonlinear programs) is developed to operate within a branch-and-bound framework; it requires the construction of a relaxed convex problem on the basis of quadratic lower bounding functions for the generic nonconvex structures. Within an exhaustive simplicial division of the constrained region, a rigorous quadratic underestimating function is constructed for each generic nonconvex structure by means of a maximal-eigenvalue analysis of the interval Hessian matrix. As the division progresses, each valid lower bound of the NLP is computed by solving the relaxed convex problem obtained by preserving the convex or linear terms, replacing each concave term with its linear convex envelope, and underestimating the special terms and the generic terms by their customized tight convex lower bounding functions or the valid quadratic lower bounding functions, respectively. The standard convergence properties of the QBB algorithm for nonconvex global optimization problems are guaranteed. Preliminary computational studies are presented to evaluate the algorithmic efficiency of the proposed QBB approach.
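For comparison, the closely related αBB construction of a quadratic underestimator (given only as background; QBB's own bounding functions are built on simplices from the interval-Hessian eigenvalue analysis described above): on a box [x^{L}, x^{U}],

    \check{f}(x) = f(x) + \alpha \sum_{i=1}^{n} (x_{i}^{L} - x_{i})(x_{i}^{U} - x_{i}),
    \qquad
    \alpha \ge \max\Big\{ 0,\; -\tfrac{1}{2} \min_{x \in [x^{L},x^{U}]} \lambda_{\min}\big(\nabla^{2} f(x)\big) \Big\},

which is convex, underestimates f on the box, and coincides with f at the box vertices; interval arithmetic on the Hessian is one way to bound the needed eigenvalue.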

13.
We present a branch-and-bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch-and-bound algorithm for convex constrained global optimization. Unlike other branch-and-bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. As a numerical example, we apply the proposed branch-and-bound algorithm to radial basis function approximations.
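The kind of bound such an approach exploits (a standard consequence of Lipschitz continuity of the Hessian, stated here for context; the algorithm's actual underestimators may differ in detail): if \|\nabla^{2} f(x) - \nabla^{2} f(y)\| \le L \|x - y\| on the set, then for all x, y

    f(y) \ge f(x) + \nabla f(x)^{\top}(y - x) + \tfrac{1}{2}(y - x)^{\top} \nabla^{2} f(x) (y - x) - \tfrac{L}{6}\|y - x\|^{3},

a (generally nonconvex) cubic underestimator whose minimum over a subregion still yields a valid lower bound.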

14.
A global optimization algorithm is presented for maximizing a sum of ratios of difference-of-convex functions over a nonconvex feasible region. The algorithm is based on a branch-and-bound framework. To obtain a difference-of-convex program, the considered problem is first reformulated by introducing as few new variables as possible. By using subgradients and convex envelopes, the fundamental problem of estimating a lower bound in the branch-and-bound algorithm is transformed into a relaxed linear programming problem which can be solved efficiently. Furthermore, the size of the relaxed linear programming problem does not change during the search. Finally, the convergence of the algorithm is analyzed and numerical results are reported.

15.
For the sum-of-ratios of convex functions problem over a nonconvex region, a deterministic method for finding its global optimal solution is given. The method is based on a branch-and-bound framework. First, by introducing new variables, the original problem is equivalently transformed into a d.c. programming problem; then subgradients and convex envelopes are used to construct a relaxed linear programming problem, so that the key task of estimating lower bounds is reduced to a sequence of linear programs. These linear programs are easy to solve, their size does not change, and they are easy to implement and to apply in practice. Branching by simplicial bisection not only guarantees exhaustiveness but also keeps the linear programs smaller. Theoretical analysis and numerical experiments show that the proposed algorithm is feasible and effective.

16.
In this paper a duality framework is discussed for the problem of optimizing a nonconvex quadratic function over an ellipsoid. Additional insight is obtained from the observation that this nonconvex problem is in a sense equivalent to a convex problem of the same type, from which known necessary and sufficient conditions for optimality readily follow. Based on the duality results, some existing solution procedures are interpreted as in fact solving the dual. The duality relations are also shown to provide a natural framework for sensitivity analysis.
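For the special case of a ball constraint (stated here for orientation; the paper treats a general ellipsoid), the well-known necessary and sufficient conditions are: x^{*} is a global minimizer of

    \min_{x}\; \tfrac{1}{2} x^{\top} Q x + c^{\top} x \quad \text{s.t.} \quad \|x\| \le \Delta

if and only if there exists \mu \ge 0 with (Q + \mu I)x^{*} = -c, \; Q + \mu I \succeq 0, \; \mu(\|x^{*}\| - \Delta) = 0 — exactly the kind of condition that a convex dual reformulation delivers.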

17.
Convex relaxations can be used to obtain lower bounds on the optimal objective function value of nonconvex quadratically constrained quadratic programs. However, for some problems, significantly better bounds can be obtained by minimizing the restricted Lagrangian function for a given estimate of the Lagrange multipliers. The difficulty in utilizing Lagrangian duality within a global optimization context is that the restricted Lagrangian is often nonconvex. Minimizing a convex underestimate of the restricted Lagrangian overcomes this difficulty and facilitates the use of Lagrangian duality within a global optimization framework. A branch-and-bound algorithm is presented that relies on these Lagrangian underestimates to provide lower bounds and on the interval Newton method to facilitate convergence in the neighborhood of the global solution. Computational results show that the algorithm compares favorably to the Reformulation–Linearization Technique for problems with a favorable structure.

18.
The problem of finding the best rank-one approximation to higher-order tensors has extensive engineering and statistical applications. It is well known that this problem is equivalent to a homogeneous polynomial optimization problem. In this paper, we study theoretical results and numerical methods for this problem, focusing in particular on the fourth-order symmetric tensor case. First, we reformulate the polynomial optimization problem as a matrix program and show the equivalence between the two problems. Then, we prove that there is no duality gap between the reformulation and its Lagrangian dual problem. Concerning approaches to the problem, we propose two relaxed models. The first is a convex quadratic matrix optimization problem regularized by the nuclear norm, while the second is a quadratic matrix program regularized by a truncated nuclear norm, which is a D.C. function and therefore nonconvex. To overcome the difficulty of solving this nonconvex problem, we approximate the nonconvex penalty by a convex term. We propose to use the proximal augmented Lagrangian method to solve these two relaxed models. In order to obtain a global solution, we propose an alternating least eigenvalue method after solving the relaxed models and prove its convergence. Numerical results presented at the end demonstrate the effectiveness and efficiency of the proposed methods, especially for nonpositive tensors.
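To make the "best rank-one approximation ≡ homogeneous polynomial optimization" connection concrete, here is a small sketch of the classical shifted symmetric higher-order power method for a fourth-order symmetric tensor. This is standard background, not the alternating least eigenvalue method proposed in the paper; the shift value and the random data are assumptions, and the iteration generally converges only to a local maximizer.

    import numpy as np
    from itertools import permutations

    def shifted_power_method(T, shift=5.0, iters=500, tol=1e-10, seed=0):
        """Approximate a maximizer of T(x,x,x,x) over the unit sphere for a
        symmetric 4th-order tensor T (shape n x n x n x n) using the shifted
        symmetric higher-order power iteration; a sufficiently large shift
        makes the iteration monotonically increasing.  Returns (x, value);
        the corresponding rank-one approximation is value * x⊗x⊗x⊗x."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(T.shape[0])
        x /= np.linalg.norm(x)
        val = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
        for _ in range(iters):
            g = np.einsum('ijkl,j,k,l->i', T, x, x, x)  # direction T(., x, x, x)
            y = g + shift * x
            x_new = y / np.linalg.norm(y)
            val_new = np.einsum('ijkl,i,j,k,l->', T, x_new, x_new, x_new, x_new)
            x, converged = x_new, abs(val_new - val) < tol
            val = val_new
            if converged:
                break
        return x, val

    # toy symmetric tensor (an assumption, just to make the sketch runnable)
    n = 4
    A = np.random.default_rng(1).standard_normal((n, n, n, n))
    T = sum(np.transpose(A, p) for p in permutations(range(4))) / 24.0
    x, val = shifted_power_method(T)
    print(val, np.linalg.norm(x))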

19.
Using the concept of a subdifferential of a vector-valued convex mapping, we provide duality formulas for the minimization of nonconvex composite functions and related optimization problems such as the minimization of a convex function over a vectorial DC constraint.

20.
We present a decomposition-approximation method for generating convex relaxations for nonconvex quadratically constrained quadratic programming (QCQP). We first develop a general conic program relaxation for QCQP based on a matrix decomposition scheme and polyhedral (piecewise linear) underestimation. By employing suitable matrix cones, we then show that the convex conic relaxation can be reduced to a semidefinite programming (SDP) problem. In particular, we investigate polyhedral underestimations for several classes of matrix cones, including the cones of rank-1 and rank-2 matrices, the cone generated by the coefficient matrices, the cone of positive semidefinite matrices and the cones induced by rank-2 semidefinite inequalities. We demonstrate that in general the new SDP relaxations can generate lower bounds at least as tight as the best known SDP relaxations for QCQP. Moreover, we give examples for which tighter lower bounds can be generated by the new SDP relaxations. We also report comparison results of different convex relaxation schemes for nonconvex QCQP with convex quadratic/linear constraints, nonconvex quadratic constraints and 0–1 constraints.
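As a point of reference for these comparisons, the basic Shor SDP relaxation of a QCQP (a standard construction, not one of the new relaxations proposed here): for

    \min_{x}\; x^{\top} Q_{0} x + 2 c_{0}^{\top} x \quad \text{s.t.} \quad x^{\top} Q_{i} x + 2 c_{i}^{\top} x \le b_{i}, \ i = 1, \dots, m,

replace x x^{\top} by a matrix variable X and drop the rank condition:

    \min_{x, X}\; \langle Q_{0}, X \rangle + 2 c_{0}^{\top} x \quad \text{s.t.} \quad \langle Q_{i}, X \rangle + 2 c_{i}^{\top} x \le b_{i}, \quad
    \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0,

which is the baseline that decomposition-based SDP relaxations are designed to tighten.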
