Similar Articles
1.
A Branch-and-Bound Method for Nonconvex Quadratic Programming Problems with Bound Constraints
This paper studies nonconvex quadratic programming problems with bound constraints. Taking ball-constrained quadratic programs and linearly constrained convex quadratic programs as subproblems, we invoke for each an efficient algorithm that computes a global optimum. We propose several tight and relaxed bounding strategies, present a branch-and-bound algorithm for finding a global optimum of the original problem, and prove its convergence; different combinations of bounding rules yield different branch-and-bound algorithms. Finally, we briefly discuss branch-and-bound ideas for globally solving nonconvex quadratic programs over general bounded convex domains.
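
To make the branch-and-bound mechanics concrete, here is a minimal sketch for a box-constrained nonconvex QP. This is not the paper's algorithm: the lower bound uses a simple αBB-style eigenvalue-shift convexification rather than the ball-constrained or linearly constrained convex QP subproblems described above, and `numpy`/`scipy` are assumed available.

```python
import numpy as np
from scipy.optimize import minimize

def solve_box_qp(Q, c, l, u):
    """Minimize 0.5*x'Qx + c'x over the box [l, u] with L-BFGS-B.
    For convex Q this is a global solve; otherwise it returns a local
    minimizer that serves as an incumbent (upper bound)."""
    x0 = (l + u) / 2
    res = minimize(lambda x: 0.5 * x @ Q @ x + c @ x, x0,
                   jac=lambda x: Q @ x + c,
                   bounds=list(zip(l, u)), method="L-BFGS-B")
    return res.x, res.fun

def lower_bound(Q, c, l, u):
    """alpha-BB style bound: f(x) + alpha*sum((l-x)*(u-x)) underestimates f
    on the box and is convex once alpha >= max(0, -lambda_min(Q)/2)."""
    alpha = max(0.0, -np.linalg.eigvalsh(Q).min() / 2)
    Qc = Q + 2 * alpha * np.eye(len(c))      # Hessian of the underestimator
    cc = c - alpha * (l + u)
    _, val = solve_box_qp(Qc, cc, l, u)      # convex QP => global minimum
    return val + alpha * (l @ u)

def branch_and_bound(Q, c, l, u, tol=1e-6):
    l, u = np.asarray(l, float), np.asarray(u, float)
    best_x, best_val = solve_box_qp(Q, c, l, u)   # initial incumbent
    boxes = [(l, u)]
    while boxes:
        lo, up = boxes.pop()
        if lower_bound(Q, c, lo, up) >= best_val - tol:
            continue                               # prune: cannot improve
        x, val = solve_box_qp(Q, c, lo, up)        # try to refine incumbent
        if val < best_val:
            best_x, best_val = x, val
        if (up - lo).max() < 1e-8:
            continue                               # box too small to split
        i = int(np.argmax(up - lo))                # bisect the longest edge
        mid = 0.5 * (lo[i] + up[i])
        u1, l2 = up.copy(), lo.copy()
        u1[i], l2[i] = mid, mid
        boxes += [(lo, u1), (l2, up)]
    return best_x, best_val
```

The sketch only illustrates the prune/branch loop; the paper's contribution lies in the specific bounding subproblems and in how tight and relaxed bounds are combined.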

2.
Motivated by weakly convex optimization and quadratic optimization problems, we first show that there is no duality gap between a difference of convex (DC) program over DC constraints and its associated dual problem. We then provide certificates of global optimality for a class of nonconvex optimization problems. As an application, we derive characterizations of robust solutions for uncertain general nonconvex quadratic optimization problems over nonconvex quadratic constraints.
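
For background, the classical Toland-Singer duality identity for unconstrained DC programs (a standard result, not necessarily the exact dual pair studied in the paper) reads, for proper closed convex g and h:

```latex
\inf_{x}\ \bigl\{\, g(x) - h(x) \,\bigr\}
  \;=\;
\inf_{y}\ \bigl\{\, h^{*}(y) - g^{*}(y) \,\bigr\},
```

where g* and h* denote the convex conjugates; the "no duality gap" statement above extends this kind of identity to DC programs with DC constraints.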

3.
We present a unified approach to establishing the existence of global minima of a (non)convex constrained optimization problem. Our results unify and generalize previous existence results for convex and nonconvex programs, including the Frank-Wolfe theorem, and for (quasi) convex quadratically constrained quadratic programs and convex polynomial programs. For example, instead of requiring the objective/constraint functions to be constant along certain recession directions, we only require them to linearly recede along these directions. Instead of requiring the objective/constraint functions to be convex polynomials, we only require the objective function to be a (quasi)convex polynomial over a polyhedral set and the constraint functions to be convex polynomials or the composition of coercive functions with linear mappings.
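
For reference, the Frank-Wolfe theorem mentioned above can be stated in its classical form:

```latex
q(x) = \tfrac12\, x^{\top} Q x + c^{\top} x \ \text{ bounded below on a nonempty polyhedron } P = \{x : Ax \le b\}
\;\;\Longrightarrow\;\;
\exists\, x^{\star} \in P:\ \ q(x^{\star}) = \inf_{x \in P} q(x).
```

No convexity of Q is required; the paper's results relax the quadratic/polyhedral structure in the directions described in the abstract.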

4.
A class of nonconvex minimization problems can be classified as hidden convex minimization problems. A nonconvex minimization problem is called hidden convex if some equivalent transformation turns it into a convex minimization problem. Sufficient conditions that are independent of the transformation are derived in this paper for identifying such seemingly nonconvex minimization problems that are equivalent to convex ones. Thus, global optimality can be achieved for this class of hidden convex optimization problems by local search methods. The results presented in this paper extend the reach of convex minimization by identifying nonconvex representations that are equivalent to convex problems.
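
A textbook instance of hidden convexity (an illustration, not taken from the paper) is posynomial minimization, which is nonconvex in the original variables but convex after an exponential change of variables:

```latex
\min_{x > 0}\ \sum_{k} c_k \prod_{i} x_i^{a_{ki}}
\qquad\longrightarrow\qquad
\min_{y}\ \sum_{k} \exp\Bigl(\log c_k + \textstyle\sum_{i} a_{ki}\, y_i\Bigr)
\qquad (x_i = e^{y_i},\ \ c_k > 0).
```

The transformed objective is a sum of exponentials of affine functions, hence convex, so any local minimizer found by a local method is a global one.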

5.
We show the importance of exploiting the complementary convex structure for efficiently solving a wide class of specially structured nonconvex global optimization problems. Roughly speaking, a specific feature of these problems is that their nonconvex nucleus can be transformed into a complementary convex structure which can then be shifted to a subspace of much lower dimension than the original underlying space. This approach leads to quite efficient algorithms for many problems of practical interest, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, and bilevel linear optimization problems.
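
For instance (an illustration in the spirit of the abstract, not its construction), a linear multiplicative program over a polytope P carries its nonconvexity in only two dimensions:

```latex
\min_{x \in P}\ (c_1^{\top}x + d_1)(c_2^{\top}x + d_2)
\;=\;
\min\ \bigl\{\, t_1 t_2 \;:\; (t_1, t_2) \in \Omega \,\bigr\},
\qquad
\Omega := \bigl\{\, (c_1^{\top}x + d_1,\ c_2^{\top}x + d_2) : x \in P \,\bigr\}.
```

The set Ω is a two-dimensional polyhedral projection of the feasible set, so the nonconvex part of the problem lives in a space whose dimension is independent of the dimension of x.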

6.
As a synchronous parallel framework, the parallel variable transformation (PVT) algorithm is effective for solving unconstrained optimization problems. In this paper, based on the idea that, by introducing the Fischer function, a constrained optimization problem can be reformulated as a differentiable unconstrained optimization problem, we propose an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems. The new algorithm can terminate as soon as one processor satisfies the termination condition, without waiting for the other processors, which enhances practical efficiency on large-scale problems. Global convergence of the new algorithm is established under suitable assumptions; in particular, the linear rate of convergence does not depend on the number of processors.
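
The Fischer function referred to above is, in its standard (Fischer-Burmeister) form,

```latex
\varphi(a, b) \;=\; \sqrt{a^{2} + b^{2}} \;-\; a \;-\; b,
\qquad
\varphi(a, b) = 0
\;\Longleftrightarrow\;
a \ge 0,\ \ b \ge 0,\ \ ab = 0,
```

so KKT/complementarity conditions of a constrained problem can be rewritten as a system of equations; the sum of squares of these residuals is a differentiable unconstrained objective, which is the kind of reformulation the abstract builds on.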

7.
In this paper, we consider the class of linearly constrained nonconvex quadratic programming problems and present a new approach based on a novel Reformulation-Linearization/Convexification Technique. In this approach, a tight linear (or convex) programming relaxation, or outer-approximation to the convex envelope of the objective function over the constrained region, is constructed for the problem by generating new constraints through the process of employing suitable products of constraints and using variable redefinitions. Various such relaxations are considered and analyzed, including ones that retain some useful nonlinear relationships. Efficient solution techniques are then explored for solving these relaxations in order to derive lower and upper bounds on the problem, and appropriate branching/partitioning strategies are used in concert with these bounding techniques to derive a convergent algorithm. Computational results are presented on a set of test problems from the literature to demonstrate the efficiency of the approach (one of these test problems had not previously been solved to optimality). It is shown that for many problems, the initial relaxation itself produces an optimal solution.
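
The core reformulation-linearization step can be sketched generically as follows (an illustration of the technique, with the matrix X introduced here to hold the linearized products):

```latex
(b_i - a_i^{\top}x)\,(b_j - a_j^{\top}x) \;\ge\; 0
\quad\Longrightarrow\quad
b_i b_j \;-\; b_i\, a_j^{\top}x \;-\; b_j\, a_i^{\top}x \;+\; a_i^{\top} X a_j \;\ge\; 0,
\qquad X_{kl} := x_k x_l .
```

Dropping the defining relation and treating X as a free variable turns every such product constraint, and the quadratic objective 0.5 x'Qx + c'x rewritten as 0.5 <Q, X> + c'x, into linear expressions in (x, X); the resulting relaxation lower-bounds the nonconvex QP, and branching then tightens the gap between X and xx'.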

8.
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
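
To fix ideas, the classical proximal linearized iteration for a DC objective g - h (shown here for the standard case of convex g and h; the paper's generalized scheme allows a nonconvex first component) is

```latex
v^{k} \in \partial h(x^{k}),
\qquad
x^{k+1} \in \arg\min_{x}\;
\Bigl\{\, g(x) \;-\; \langle v^{k},\, x \rangle
      \;+\; \tfrac{1}{2\lambda_{k}}\,\| x - x^{k} \|^{2} \,\Bigr\}.
```

The concave part -h is replaced by its linearization at the current iterate and a proximal term controls the step; the Kurdyka–Łojasiewicz property is the standard tool for upgrading subsequential convergence of such iterates to convergence of the whole sequence.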

9.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower-level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem where the first-order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
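
Schematically, the combined reformulation described above takes the following shape (the symbols F, f, Z, V are introduced here for illustration and are not the paper's notation):

```latex
\min_{x,\,y}\ F(x, y)
\quad\text{s.t.}\quad
y \in Z,\qquad
0 \in \nabla_{y} f(x, y) + N_{Z}(y),\qquad
f(x, y) \;\le\; V(x) := \min_{z \in Z} f(x, z).
```

The normal-cone inclusion is the lower-level first-order condition, and the value-function inequality certifies actual lower-level optimality; since V is generally nonsmooth, this is exactly the kind of nonsmooth, nonconvex constrained problem that the smoothing augmented Lagrangian method targets.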

10.
We present an interior proximal method for solving constrained nonconvex optimization problems where the objective function is given by the difference of two convex functions (a DC function). To this end, we consider a linearized proximal method with a proximal distance as regularization. Convergence is analyzed for particular choices of the proximal distance, namely second-order homogeneous proximal distances and Bregman distances. Finally, some academic numerical results are presented for a constrained DC problem and for generalized Fermat–Weber location problems.
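
For reference, the Bregman distance generated by a differentiable, strictly convex kernel φ (one of the proximal distances mentioned above) is

```latex
D_{\phi}(x, y) \;=\; \phi(x) \;-\; \phi(y) \;-\; \langle \nabla\phi(y),\ x - y \rangle,
```

and a linearized proximal step with this regularization replaces the Euclidean term (1/2λ)||x - x^k||^2 by (1/λ) D_φ(x, x^k); with a barrier-like kernel the iterates remain in the interior of the constraint set, which is what makes such a scheme an interior proximal method.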

11.
The Lagrangian function in the conventional theory for solving constrained optimization problems is a linear combination of the cost and constraint functions. Typically, the optimality conditions based on linear Lagrangian theory are either necessary or sufficient, but not both unless the underlying cost and constraint functions are also convex. We propose a somewhat different approach for solving a nonconvex inequality constrained optimization problem based on a nonlinear Lagrangian function. This leads to optimality conditions which are both sufficient and necessary, without any convexity assumption. Subsequently, under appropriate assumptions, the optimality conditions derived from the new nonlinear Lagrangian approach are used to obtain an equivalent root-finding problem. By appropriately defining a dual optimization problem and an alternative dual problem, we show that a zero duality gap always holds regardless of convexity, contrary to the case of linear Lagrangian duality.

12.
The existence of a saddle point in nonconvex constrained optimization problems is considered in this paper. We show that, under some mild conditions, the existence of a saddle point can be ensured in an equivalent p-th power formulation for a general class of nonconvex constrained optimization problems. This result expands considerably the class of optimization problems where a saddle point exists and thus enlarges the family of nonconvex problems that can be solved by dual-search methods.

13.
In Akrotirianakis and Floudas (2004) we presented the theoretical foundations of a new class of convex underestimators for C² nonconvex functions. In this paper, we present computational experience with those underestimators incorporated within a Branch-and-Bound algorithm for box-constrained problems. The algorithm can be used to solve global optimization problems that involve C² functions. We discuss several ways of incorporating the convex underestimators within a Branch-and-Bound framework. The resulting Branch-and-Bound algorithm is then used to solve a number of difficult box-constrained global optimization problems. A hybrid algorithm is also introduced, which incorporates a stochastic algorithm, the Random-Linkage method, for the solution of the nonconvex underestimating subproblems arising within a Branch-and-Bound framework. The resulting algorithm also solves the same set of test problems efficiently.
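
The classical αBB underestimator, which this line of work generalizes, replaces a C² function f on a box [x^L, x^U] by (standard form, shown for context):

```latex
L(x) \;=\; f(x) \;+\; \sum_{i=1}^{n} \alpha_i\,\bigl(x_i^{L} - x_i\bigr)\bigl(x_i^{U} - x_i\bigr),
\qquad
\alpha_i \;\ge\; \max\Bigl\{\,0,\ -\tfrac12 \min_{x \in [x^{L},\,x^{U}]} \lambda_{\min}\bigl(\nabla^{2} f(x)\bigr)\Bigr\}.
```

Each added term is nonpositive on the box, so L ≤ f there, and each contributes 2α_i to the diagonal of the Hessian, which makes L convex for the stated choice of α_i; the Branch-and-Bound lower bounds are then minima of such underestimators over subboxes.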

14.
Using the concept of a subdifferential of a vector-valued convex mapping, we provide duality formulas for the minimization of nonconvex composite functions and related optimization problems such as the minimization of a convex function over a vectorial DC constraint.

15.
We develop a convergence theory for convex and linearly constrained trust region methods which only requires that the step between iterates produce a sufficient reduction in the trust region subproblem. Global convergence is established for general convex constraints while the local analysis is for linearly constrained problems. The main local result establishes that if the sequence converges to a nondegenerate stationary point then the active constraints at the solution are identified in a finite number of iterations. As a consequence of the identification properties, we develop rate of convergence results by assuming that the step is a truncated Newton method. Our development is mainly geometrical; this approach allows the development of a convergence theory without any linear independence assumptions.
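
The trust region subproblem referred to above, over a convex feasible set Ω, has the generic form (a sketch in standard notation, not taken from the paper):

```latex
\min_{s}\ \ m_{k}(s) \;=\; f(x_{k}) + \nabla f(x_{k})^{\top} s + \tfrac12\, s^{\top} B_{k}\, s
\quad\text{s.t.}\quad
x_{k} + s \in \Omega,\ \ \|s\| \le \Delta_{k}.
```

The point of the convergence theory is that the step need not solve this subproblem exactly; it only has to achieve a fixed fraction of the model decrease obtained by a Cauchy-type (projected gradient) step, which is the "sufficient reduction" condition mentioned in the abstract.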

16.
A global optimization method, QBB, for twice-differentiable NLPs (nonlinear programs) is developed; it operates within a branch-and-bound framework and requires the construction of a relaxed convex problem on the basis of quadratic lower bounding functions for the generic nonconvex structures. Within an exhaustive simplicial division of the constrained region, a rigorous quadratic underestimation function is constructed for the generic nonconvex function structure by means of a maximal eigenvalue analysis of the interval Hessian matrix. As the division proceeds, each valid lower bound of the NLP problem is computed by convex programming on the relaxed optimization problem obtained by preserving the convex or linear terms, replacing the concave terms with their linear convex envelopes, and underestimating the special terms and the generic terms by their customized tight convex lower bounding functions or the valid quadratic lower bounding functions, respectively. The standard convergence properties of the QBB algorithm for nonconvex global optimization problems are guaranteed. Preliminary computational studies are presented in order to evaluate the algorithmic efficiency of the proposed QBB approach.
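
The kind of rigorous quadratic underestimator used in such interval-based approaches can be sketched as follows (a generic form under assumptions stated here, not the paper's exact construction): if an interval Hessian computation guarantees that every eigenvalue of the Hessian of f on a simplex S is at least some scalar λ̲ (possibly negative), then for any anchor point x₀ in S

```latex
f(x) \;\ge\; q(x) \;:=\; f(x_{0}) \;+\; \nabla f(x_{0})^{\top}(x - x_{0})
        \;+\; \tfrac12\, \underline{\lambda}\,\| x - x_{0} \|^{2}
\qquad \text{for all } x \in S.
```

When λ̲ < 0 the quadratic q is concave; its minimum over the simplex is nevertheless attained at a vertex, so a valid lower bound remains cheap to extract.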

17.
We describe a general scheme for solving nonconvex optimization problems, where in each iteration the nonconvex feasible set is approximated by an inner convex approximation. The latter is defined using an upper bound on the nonconvex constraint functions. Under appropriate conditions, monotone convergence to a KKT point is established. The scheme is applied to truss topology design (TTD) problems, where the nonconvex constraints are associated with bounds on displacements and stresses. It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem; hence large-scale TTD problems can be efficiently solved by the proposed method.
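
Generically, the inner-approximation step looks as follows (a sketch; ḡ is any convex majorant of the nonconvex constraint function g that is tight at the current iterate, and the symbols are introduced here for illustration):

```latex
g(x) \;\le\; \bar g(x;\, x^{k})\ \ \forall x,
\qquad
\bar g(x^{k};\, x^{k}) = g(x^{k}),
\qquad
x^{k+1} \in \arg\min_{x}\ \bigl\{\, f(x) \;:\; \bar g(x;\, x^{k}) \le 0 \,\bigr\}.
```

Because ḡ overestimates g, the approximate feasible set is contained in the true one, so every iterate is feasible for the original problem and the objective value cannot increase; this is the mechanism behind the monotone convergence to a KKT point mentioned in the abstract.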

18.
The first-order optimality conditions of linearly constrained convex optimization problems and of saddle point problems form a monotone variational inequality. When these problems are solved in the variational inequality framework, choosing a suitable matrix G and applying the PPA (proximal point algorithm) under the G-norm makes the subproblems in each iteration quite easy to solve. This paper proves that the error bound of this class of customized PPA algorithms has a 1/k convergence rate.
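
A generic customized-PPA step for a monotone variational inequality VI(Ω, F), i.e. find w* in Ω with (w - w*)ᵀF(w*) ≥ 0 for all w in Ω, reads (a sketch; G is the positive definite scaling matrix mentioned above):

```latex
w^{k+1} \in \Omega:\qquad
(w - w^{k+1})^{\top}\bigl[\, F(w^{k+1}) \;+\; G\,(w^{k+1} - w^{k}) \,\bigr] \;\ge\; 0
\qquad \forall\, w \in \Omega.
```

For structured saddle point problems, G can be chosen so that this condition decouples into explicit primal and dual updates, which is why the per-iteration subproblems become easy to solve.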

19.
We consider the problem of minimizing a convex function f(x) under Lipschitz constraints f_i(x) ≤ 0, i = 1, ..., m. By transforming the system of Lipschitz constraints f_i(x) ≤ 0, i = 1, ..., m, into a single constraint of the form h(x) − ‖x‖² ≤ 0, with h(·) being a closed convex function, we convert the problem into a convex program with an additional reverse convex constraint. Under a regularity assumption, we apply Tuy's method for convex programs with an additional reverse convex constraint to solve the converted problem. In this way, we construct an algorithm which reduces the problem to a sequence of subproblems of minimizing a concave, quadratic, separable function over a polytope. Finally, we show how the algorithm can be used for the decomposition of Lipschitz optimization problems involving relatively few nonconvex variables.

20.
We consider the augmented Lagrangian method (ALM) for constrained optimization problems in the presence of convex inequality and convex abstract constraints. We focus on the case where the Lagrangian sub-problems are solved up to approximate stationary points, with increasing accuracy. We analyze two different criteria of approximate stationarity for the sub-problems and we prove the global convergence to stationary points of ALM in both cases.
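
For reference, the classical augmented Lagrangian for inequality constraints g_i(x) ≤ 0 (the abstract's setting also includes an abstract convex constraint, omitted here) is

```latex
L_{\rho}(x, \lambda) \;=\; f(x) \;+\; \frac{1}{2\rho}\sum_{i=1}^{m}
  \Bigl( \max\{0,\ \lambda_i + \rho\, g_i(x)\}^{2} - \lambda_i^{2} \Bigr),
\qquad
\lambda_i^{+} \;=\; \max\{0,\ \lambda_i + \rho\, g_i(x^{+})\},
```

where each outer iteration minimizes L_ρ(·, λ) only approximately, to an approximate stationary point whose accuracy increases over the iterations as described above, before the multipliers are updated.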
