Similar Documents
20 similar documents found.
1.
A class of nonconvex minimization problems can be classified as hidden convex minimization problems. A nonconvex minimization problem is called hidden convex if there exists an equivalent transformation under which it becomes a convex minimization problem. Sufficient conditions that are independent of any particular transformation are derived in this paper for identifying such seemingly nonconvex minimization problems that are in fact equivalent to convex ones. Global optimality can therefore be achieved for this class of hidden convex optimization problems by local search methods. The results presented in this paper extend the reach of convex minimization by identifying convex problems hidden behind nonconvex representations.
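The transformation idea can be illustrated with a small, hypothetical example (not taken from the paper): a posynomial objective is nonconvex in the original variables x > 0, but substituting x = exp(y) and taking the logarithm yields a convex problem, so a local solver recovers the global minimum. The data and function names below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Hypothetical posynomial f(x) = sum_k c_k * prod_i x_i**A[k, i], x > 0.
# Nonconvex in x, but log f(exp(y)) = logsumexp(log c + A @ y) is convex in y.
c = np.array([1.0, 2.0, 0.5])
A = np.array([[1.0, -2.0],
              [-1.0, 1.0],
              [2.0, 0.0]])

def transformed_objective(y):
    # Convex in y: log-sum-exp of affine functions of y.
    return logsumexp(np.log(c) + A @ y)

# A local method now finds the global minimizer of the transformed problem.
res = minimize(transformed_objective, x0=np.zeros(2), method="BFGS")
x_star = np.exp(res.x)                 # map back to the original variables
print("x* ~", x_star, " f(x*) ~", np.exp(res.fun))
```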

2.
A global optimization method, QBB, for twice-differentiable NLP (nonlinear programming) problems is developed. It operates within a branch-and-bound framework and constructs a relaxed convex problem based on quadratic lower-bounding functions for generic nonconvex structures. Within an exhaustive simplicial division of the constrained region, a rigorous quadratic underestimating function is constructed for a generic nonconvex function by means of maximal-eigenvalue analysis of the interval Hessian matrix. As the division proceeds, each valid lower bound of the NLP problem is computed by solving the convex relaxation obtained by preserving convex or linear terms, replacing concave terms with their linear convex envelopes, and underestimating special terms and generic terms with customized tight convex lower-bounding functions or valid quadratic lower-bounding functions, respectively. The standard convergence properties of the QBB algorithm for nonconvex global optimization problems are guaranteed. Preliminary computational studies are presented to evaluate the efficiency of the proposed QBB approach.
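As a hedged sketch of the bounding mechanism (the paper works over simplices with a maximal-eigenvalue analysis of the interval Hessian; here a box domain and a simpler Gershgorin-style eigenvalue bound are assumed), a convexifying shift alpha and the resulting quadratic underestimator could be computed as follows. The test function and the interval enclosure of its Hessian are illustrative assumptions.

```python
import numpy as np

def alpha_from_interval_hessian(H_low, H_high):
    """Gershgorin-style lower bound on the smallest eigenvalue of every symmetric
    Hessian in the interval [H_low, H_high], and the shift alpha >= -lambda_min/2
    that makes the quadratic underestimator convex."""
    n = H_low.shape[0]
    lam_min = np.inf
    for i in range(n):
        radius = sum(max(abs(H_low[i, j]), abs(H_high[i, j]))
                     for j in range(n) if j != i)
        lam_min = min(lam_min, H_low[i, i] - radius)
    return max(0.0, -0.5 * lam_min)

def quadratic_underestimator(f, alpha, l, u):
    """alphaBB-type function: convex on [l, u] and <= f there."""
    return lambda x: f(x) + alpha * np.dot(l - x, u - x)

# Illustrative function on the box [0, 2] x [0, 2] (an assumption, not from the paper).
f = lambda x: np.sin(x[0]) * x[1] ** 2
# A valid (hand-derived, slightly loose) enclosure of its Hessian entries on the box.
H_low  = np.array([[-4.0, -2.0], [-2.0, 0.0]])
H_high = np.array([[ 0.0,  4.0], [ 4.0, 2.0]])

alpha = alpha_from_interval_hessian(H_low, H_high)
q = quadratic_underestimator(f, alpha, l=np.zeros(2), u=2.0 * np.ones(2))
x = np.array([1.0, 1.5])
print(alpha, f(x), q(x))               # q(x) <= f(x) everywhere on the box
```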

3.
This paper presents a global optimization approach for solving signomial geometric programming (SGP) problems. We employ an accelerated extended cutting plane (ECP) method integrated with piecewise linear (PWL) approximations. In this approach, the feasible regions determined by the constraints are separated into convex and nonconvex ones in the logarithmic domain. In the nonconvex feasible regions, the corresponding constraint functions are converted into mixed-integer linear constraints using PWL approximations, while the constraints with convex feasible regions are handled by the ECP method. Pre-processed initial cuts and batched cuts are used to accelerate the algorithm. Numerical results show that the proposed approach solves SGP problems to global optimality efficiently and effectively.
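A minimal sketch of the PWL building block on a univariate term (an illustrative assumption, not the paper's formulation): in the full method the pieces below would be encoded as mixed-integer linear constraints on the nonconvex terms in the log domain.

```python
import numpy as np

def pwl_pieces(g, lo, hi, n=8):
    """Sample n+1 breakpoints of g on [lo, hi] and return the breakpoints plus the
    (slope, intercept) of each linear piece of the piecewise linear interpolant."""
    z = np.linspace(lo, hi, n + 1)
    v = np.array([g(t) for t in z])
    slopes = np.diff(v) / np.diff(z)
    intercepts = v[:-1] - slopes * z[:-1]
    return z, slopes, intercepts

def pwl_eval(t, z, slopes, intercepts):
    """Evaluate the piecewise linear interpolant at t."""
    k = int(np.clip(np.searchsorted(z, t) - 1, 0, len(slopes) - 1))
    return slopes[k] * t + intercepts[k]

# Hypothetical nonconvex term after the log-domain substitution x = exp(y).
g = lambda y: -np.exp(0.5 * y)
z, a, b = pwl_pieces(g, lo=0.0, hi=3.0)
print(pwl_eval(1.7, z, a, b), g(1.7))  # PWL value vs. exact value
```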

4.
A Branch-and-Bound Method for Nonconvex Quadratic Programming with Bound Constraints
This paper studies nonconvex quadratic programming problems with bound constraints. Taking ball-constrained quadratic programming problems and linearly constrained convex quadratic programming problems as subproblems, we invoke for each an effective algorithm that finds its global optimum. We propose several tight and relaxed bounding strategies, give a branch-and-bound algorithm for finding the global optimum of the original problem, and prove its convergence; different combinations of the bounding strategies yield different branch-and-bound algorithms. Finally, we briefly discuss the branch-and-bound idea for globally solving nonconvex quadratic programming problems over general bounded convex domains.
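A compact sketch of the general branch-and-bound idea described above, under simplifying assumptions: a box domain, a single eigenvalue-shift convex relaxation as the bounding strategy (rather than the ball-constrained and convex-QP subproblems used in the paper), and illustrative data.

```python
import heapq
import numpy as np
from scipy.optimize import minimize

# Illustrative data (an assumption): min 0.5*x'Qx + c'x over the box [-1, 1]^2,
# with Q indefinite, so the problem is nonconvex.
Q = np.array([[2.0, 3.0], [3.0, -4.0]])
c = np.array([-1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x + c @ x
lam_min = min(0.0, np.linalg.eigvalsh(Q)[0])        # most negative eigenvalue of Q

def bound(lo, hi):
    """Lower bound on the sub-box [lo, hi]: f(x) - 0.5*lam_min*(lo-x)'(hi-x)
    is convex (its Hessian is Q + |lam_min|*I) and <= f on the box."""
    g = lambda x: f(x) - 0.5 * lam_min * np.dot(lo - x, hi - x)
    res = minimize(g, 0.5 * (lo + hi), bounds=list(zip(lo, hi)))
    return res.fun, res.x                            # bound and a feasible point

def branch_and_bound(lo, hi, tol=1e-3, max_nodes=2000):
    best_val, best_x = np.inf, None
    heap = [(bound(lo, hi)[0], tuple(lo), tuple(hi))]
    nodes = 0
    while heap and nodes < max_nodes:
        lb, lo_t, hi_t = heapq.heappop(heap)
        nodes += 1
        if lb >= best_val - tol:                     # prune this sub-box
            continue
        lo_b, hi_b = np.array(lo_t), np.array(hi_t)
        i = int(np.argmax(hi_b - lo_b))              # branch on the longest edge
        mid = 0.5 * (lo_b[i] + hi_b[i])
        idx = np.arange(lo_b.size) == i
        for c_lo, c_hi in ((lo_b, np.where(idx, mid, hi_b)),
                           (np.where(idx, mid, lo_b), hi_b)):
            c_lb, x = bound(c_lo, c_hi)
            if f(x) < best_val:                      # x is feasible: update incumbent
                best_val, best_x = f(x), x
            if c_lb < best_val - tol:
                heapq.heappush(heap, (c_lb, tuple(c_lo), tuple(c_hi)))
    return best_val, best_x

print(branch_and_bound(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```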

5.
Trade-off information related to Pareto optimal solutions is important in multiobjective optimization problems with conflicting objectives. Recently, the concept of trade-off directions has been introduced for convex problems; these trade-offs are characterized with the help of tangent cones. Generalized trade-off directions for nonconvex problems can be defined by replacing convex tangent cones with nonconvex contingent cones. Here we study how the convex concepts and results can be generalized to the nonconvex case. Giving up convexity naturally means that local rather than global analysis is needed.

6.
The subject of this article is a class of global optimization problems in which the variables can be divided into two groups such that, in each group, the functions involved have the same structure (e.g., linear, convex, or concave). Based on the decomposition idea of Benders (Ref. 1), a corresponding master problem is defined on the space of one of the two groups of variables. The objective function of this master problem is in fact the optimal value function of a nonlinear parametric optimization problem. To solve the resulting master problem, a branch-and-bound scheme is proposed in which lower bounds are estimated by applying the well-known weak duality theorem of Lagrange duality. The results of this article concentrate on two subjects: investigating the convergence of the general algorithm and solving dual problems of some special classes of nonconvex optimization problems. Based on results in sensitivity and stability theory and in parametric optimization, conditions for convergence are established by investigating the so-called dual properness property and the upper semicontinuity of the objective function of the master problem. The general algorithm is then discussed in detail for some nonconvex problems, including concave minimization problems with a special structure, general quadratic problems, optimization problems over the efficient set, and linear multiplicative programming problems.
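A minimal sketch of the weak-duality bounding step on a hypothetical toy problem (the data, box feasible set, and multistart local minimization are illustrative assumptions, not the article's master-problem construction):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: min f(x) = -x0**2 + x1  s.t.  g(x) = x0 + x1 - 1 <= 0,  x in [0, 1]^2.
f = lambda x: -x[0] ** 2 + x[1]
g = lambda x: x[0] + x[1] - 1.0
bounds = [(0.0, 1.0), (0.0, 1.0)]

def lagrangian_lower_bound(lam):
    """Weak duality: for any lam >= 0, the minimum of f + lam*g over the box
    is a valid lower bound on the constrained optimal value."""
    L = lambda x: f(x) + lam * g(x)
    best = np.inf
    for s in np.linspace(0, 1, 5):         # multistart, since L is nonconvex in x
        for t in np.linspace(0, 1, 5):
            best = min(best, minimize(L, np.array([s, t]), bounds=bounds).fun)
    return best

for lam in (0.0, 0.5, 1.0, 2.0):
    print(f"lambda = {lam:.1f}   lower bound = {lagrangian_lower_bound(lam):.4f}")
```

Here the constrained optimum is -1, attained at x = (1, 0), and the Lagrangian bound is tight for multipliers between 0 and 1.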

7.
It is shown that, for very general classes of nonconvex global optimization problems, the duality gap obtained by solving a corresponding Lagrangian dual is reduced to zero in the limit when the approach is combined with suitably refined partitioning of the feasible set. A similar result holds for partly convex problems, where exhaustive partitioning is applied only in the space of nonconvex variables. Applications include branch-and-bound approaches for linearly constrained problems where convex envelopes can be computed, certain generalized bilinear problems, linearly constrained optimization of sums of ratios of affine functions, and concave minimization under reverse convex constraints.

8.
This paper states a result on sets in ordered linear spaces that can be used to show that, under suitable conditions, certain properties of a set are inherited by its convex hull. As applications, a characterization of weakly efficient points and a duality result for nonconvex vector optimization problems are given.

9.
We describe a general scheme for solving nonconvex optimization problems in which, at each iteration, the nonconvex feasible set is approximated by an inner convex approximation. The latter is defined using an upper bound on the nonconvex constraint functions. Under appropriate conditions, monotone convergence to a KKT point is established. The scheme is applied to truss topology design (TTD) problems, where the nonconvex constraints are associated with bounds on displacements and stresses. It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem, so that large-scale TTD problems can be solved efficiently by the proposed method.
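A toy sketch of the inner-approximation idea (not the paper's TTD formulation; the problem data are assumptions): the concave part of the nonconvex constraint is linearized at the current iterate, which overestimates the constraint function and therefore shrinks the feasible set to a convex inner approximation; each subproblem here is an LP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem:  min x0 + x1   s.t.  g(x) = 1 - x0**2 - x1 <= 0,   0 <= x <= 2.
# The concave part of g is -x0**2; its linearization at x0 = a overestimates g,
# so the linearized constraint defines an inner convex approximation.
x = np.array([0.2, 1.2])                           # a feasible starting point
for _ in range(20):
    a = x[0]
    # inner approximation: 1 - (2*a*x0 - a**2) - x1 <= 0
    res = linprog(c=[1.0, 1.0],
                  A_ub=[[-2.0 * a, -1.0]],
                  b_ub=[-(1.0 + a ** 2)],
                  bounds=[(0.0, 2.0), (0.0, 2.0)])
    if np.linalg.norm(res.x - x) < 1e-9:
        break
    x = res.x
print("KKT point ~", x, "  objective ~", x.sum())  # objective decreases monotonically
```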

10.
In this paper we develop a decomposition method, based on a pricing mechanism of the kind widely applied to linear and convex programs, for a class of nonconvex optimization problems: minimum concave-cost flow problems over directed, uncapacitated networks with a hierarchical structure. This paper was completed during the author's stay at Sophia University, Tokyo, Japan, supported by a Sophia lecturing-research grant.

11.
We use asymptotic analysis to develop finer estimates for the efficient, weakly efficient and properly efficient solution sets (and for their asymptotic cones) of convex/quasiconvex vector optimization problems. We also provide a new representation of the efficient solution set without any convexity assumption, and the estimates involve the minima of linear scalarizations of the original vector problem. Some new necessary conditions for a point to be an efficient or weakly efficient solution of general convex vector optimization problems, as well as of the nonconvex quadratic multiobjective optimization problem, are established.

12.
Motivated by weakly convex optimization and quadratic optimization problems, we first show that there is no duality gap between a difference of convex (DC) program over DC constraints and its associated dual problem. We then provide certificates of global optimality for a class of nonconvex optimization problems. As an application, we derive characterizations of robust solutions for uncertain general nonconvex quadratic optimization problems over nonconvex quadratic constraints.

13.
In this paper, we present a global optimization method for solving nonconvex mixed-integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions, single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions, two versions of the so-called αBB underestimator, valid for twice-differentiable functions, are integrated in the reformulation framework. However, in contrast to branch-and-bound type algorithms, no direct branching is performed. Instead, a piecewise convex reformulation is used to convexify the entire problem in an extended variable space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solutions of the convexified and overestimated problems form a sequence converging to a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.
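As a hedged illustration of the exponential transformation for signomial terms (the paper's exact transformations and underestimators may differ): a signomial term with a positive coefficient becomes convex after substituting each variable by an exponential, while negative terms require a separate underestimation step.

```latex
% Positive signomial term, c > 0, x_i > 0:
c \prod_{i} x_i^{p_i} \;=\; c\,\exp\!\Big(\sum_{i} p_i y_i\Big),
\qquad x_i = e^{y_i},
% which is convex in y (a positive multiple of the exponential of an affine
% function); terms with c < 0 become concave in y and are instead
% underestimated, e.g. via power transformations combined with piecewise
% linear approximations.
```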

14.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems in which the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and its linear term is generated using abstract descent directions; the trust-region subproblems can therefore be constructed easily and solved efficiently by cheap, standard methods. When the accuracy of the model function at the solution of the subproblem is insufficient, we add a safeguard on the step sizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a step-size modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
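For comparison, a standard proximal-gradient (ISTA) baseline for the same composite structure min f(x) + mu*||x||_1 is sketched below; this is not the trust-region method of the paper, and the random least-squares data are illustrative assumptions.

```python
import numpy as np

# min F(x) = 0.5*||A x - b||^2 + mu*||x||_1 : smooth part + nonsmooth convex part.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
b = rng.standard_normal(30)
mu = 0.1
grad = lambda x: A.T @ (A @ x - b)                 # gradient of the smooth part
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1/L, L = Lipschitz constant

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(60)
for _ in range(500):
    x = soft_threshold(x - step * grad(x), step * mu)

obj = 0.5 * np.sum((A @ x - b) ** 2) + mu * np.abs(x).sum()
print("objective:", obj, "  nonzeros:", int(np.count_nonzero(x)))
```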

15.
Optimization, 2012, 61(4): 379–389
Formulas for computing the directional derivative of the optimal value function, or of lower or upper bounds on it, are well known from the literature. Because these formulas typically have a minimax structure, methods from nondifferentiable optimization are required.

For a fully parametrized convex problem, the paper transforms the mentioned minimax formulas into ordinary programming problems. Although these are nonconvex in general, the computational effort is much lower than for the minimax problems. In several special cases, for instance linear least-squares problems, linear programming problems arise.
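One classical minimax formula of this kind, stated here under suitable regularity assumptions as a hedged reminder (the precise statement used in the paper may differ):

```latex
% Parametric convex problem with solution set S(u), multiplier set \Lambda(u)
% and Lagrangian L(x,\lambda,u) = f(x,u) + \sum_j \lambda_j g_j(x,u):
v(u) = \min_x \{\, f(x,u) : g_j(x,u) \le 0,\ j = 1,\dots,m \,\},
\qquad
v'(u; d) \;=\; \min_{x \in S(u)} \; \max_{\lambda \in \Lambda(u)}
\nabla_u L(x,\lambda,u)^{\top} d .
```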

16.
Many nonconvex nonlinear programming (NLP) problems of practical interest involve bilinear terms and linear constraints, as well as, potentially, other convex and nonconvex terms and constraints. In such cases, it may be possible to augment the formulation with additional linear constraints (a subset of Reformulation-Linearization Technique constraints) which do not affect the feasible region of the original NLP but tighten that of its convex relaxation to the extent that some bilinear terms may be dropped from the problem formulation. We present an efficient graph-theoretical algorithm for effecting such exact reformulations of large, sparse NLPs. The global solution of the reformulated problem using spatial branch-and-bound algorithms is usually significantly faster than that of the original NLP. We illustrate this point by applying our algorithm to a set of pooling and blending global optimization problems.
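As a hedged reminder of the kind of linear constraints involved (the RLT subset exploited by the graph-theoretical algorithm may be broader), multiplying pairs of nonnegative bound factors and replacing each bilinear product xy by a new variable w gives the familiar McCormick inequalities:

```latex
% From (x - x^L)(y - y^L) \ge 0,\; (x^U - x)(y^U - y) \ge 0,
%      (x - x^L)(y^U - y) \ge 0,\; (x^U - x)(y - y^L) \ge 0,
% with w standing for the product xy:
w \ge x^L y + x y^L - x^L y^L, \qquad w \ge x^U y + x y^U - x^U y^U,
w \le x^L y + x y^U - x^L y^U, \qquad w \le x^U y + x y^L - x^U y^L .
```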

17.
We propose a decomposition algorithm for a special class of nonconvex mixed-integer nonlinear programming problems that have an assignment constraint. If the assignment decisions are decoupled from the remaining constraints of the optimization problem, we propose a column enumeration approach. The master problem is a partitioning problem whose objective function coefficients are computed via subproblems. These subproblems can be linear, mixed-integer linear, (non)convex nonlinear, or mixed-integer nonlinear; the important property is that their exact global optimum can be computed quickly. The proposed technique is illustrated on a cutting problem whose nonlinear programming subproblems are solved to global optimality.

18.
Several optimization schemes are known for convex optimization problems, but numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
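A sketch of the classical DC algorithm (DCA) that the paper's generalized proximal point method builds on; this is the standard scheme for f = g - h with convex g and h, not the paper's algorithm, and the one-dimensional example is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 2*x**2 (global minima at x = +-1).
g = lambda x: x ** 4
h_grad = lambda x: 4.0 * x                 # gradient (subgradient) of h

x = 0.3                                    # starting point
for _ in range(50):
    s = h_grad(x)
    # DCA step: replace h by its linearization at x, giving a convex majorizer
    # of f whose minimizer is the next iterate.
    x = minimize_scalar(lambda t: g(t) - s * t).x
print("stationary point ~", x, "  f(x) ~", x ** 4 - 2.0 * x ** 2)
```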

19.
The main objective of this study is to discuss the optimum correction of linear inequality systems and absolute value equations (AVE). A simple and efficient feasible-direction method is provided for solving the two fractional nonconvex minimization problems that result from the optimal correction of a linear system. We show that, in some special (but frequently encountered) cases, convex optimization problems can be solved instead of the not-necessarily-convex fractional problems. Using the method of feasible directions, we then solve the optimal correction problem. Examples are provided to illustrate the efficiency and validity of the proposed method.

20.
Convex relaxations can be used to obtain lower bounds on the optimal objective function value of nonconvex quadratically constrained quadratic programs. However, for some problems, significantly better bounds can be obtained by minimizing the restricted Lagrangian function for a given estimate of the Lagrange multipliers. The difficulty in utilizing Lagrangian duality within a global optimization context is that the restricted Lagrangian is often nonconvex. Minimizing a convex underestimate of the restricted Lagrangian overcomes this difficulty and facilitates the use of Lagrangian duality within a global optimization framework. A branch-and-bound algorithm is presented that relies on these Lagrangian underestimates to provide lower bounds and on the interval Newton method to facilitate convergence in the neighborhood of the global solution. Computational results show that the algorithm compares favorably to the Reformulation–Linearization Technique for problems with a favorable structure.
