Similar Literature
20 similar documents retrieved.
1.
In this paper, a new global optimization method is proposed for an optimization problem with twice-differentiable objective and constraint functions of a single variable. The method employs a difference of convex underestimator and a convex cut function, where the former is a continuous piecewise concave quadratic function, and the latter is a convex quadratic function. The main objectives of this research are to determine a quadratic concave underestimator that does not need an iterative local optimizer to determine the lower bounding value of the objective function and to determine a convex cut function that effectively detects infeasible regions for nonconvex constraints. The proposed method is proven to have a finite ε-convergence to locate the global optimum point. The numerical experiments indicate that the proposed method competes with another covering method, the index branch-and-bound algorithm, which uses the Lipschitz constant.
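As background (a generic construction for illustration, not necessarily the one developed in the paper): for a twice-differentiable f on an interval [a, b] with f''(x) ≥ m for all x in [a, b], Taylor's theorem at any reference point c in [a, b] gives the quadratic lower bound

    \[
      f(x) \;\ge\; q(x) := f(c) + f'(c)\,(x - c) + \tfrac{m}{2}\,(x - c)^2,
      \qquad m \le \min_{t \in [a,b]} f''(t),
    \]

which is concave whenever m < 0, so its minimum over [a, b] is attained at an endpoint and the lower bounding value is available without any iterative local optimizer.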

2.
The purpose of this article is to develop a branch-and-bound algorithm using duality bounds for the general quadratically-constrained quadratic programming problem and having the following properties: (i) duality bounds are computed by solving ordinary linear programs; (ii) they are at least as good as the lower bounds obtained by solving relaxed problems, in which each nonconvex function is replaced by its convex envelope; (iii) standard convergence properties of branch-and-bound algorithms for nonconvex global optimization problems are guaranteed. Numerical results of preliminary computational experiments for the case of one quadratic constraint are reported.

3.
A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method, αBB, is implemented in C and tested on a variety of example problems.
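The α parameter referred to here is the one used in the standard αBB underestimator: a generic twice-differentiable nonconvex term f over a box [x^L, x^U] is underestimated by

    \[
      L(x) \;=\; f(x) + \alpha \sum_{i=1}^{n} (x_i^{L} - x_i)(x_i^{U} - x_i),
      \qquad
      \alpha \;\ge\; \max\Big\{0,\ -\tfrac{1}{2}\min_{x \in [x^L,\,x^U]} \lambda_{\min}\big(\nabla^2 f(x)\big)\Big\},
    \]

which is convex on the box, lies below f, and coincides with f at the box vertices; the underestimation gap shrinks as the box is subdivided, which is what drives the ε-convergence of the branch and bound scheme.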

4.
A global optimization algorithm is presented for maximizing a sum of ratios of differences of convex functions over a nonconvex feasible region. The algorithm is based on a branch-and-bound framework. To obtain a difference-of-convex program, the problem is first reformulated by introducing as few new variables as possible. By using subgradients and convex envelopes, the fundamental problem of estimating the lower bound in the branch-and-bound algorithm is transformed into a relaxed linear programming problem that can be solved efficiently. Furthermore, the size of the relaxed linear programming problem does not change during the search. Finally, the convergence of the algorithm is analyzed and numerical results are reported.

5.
The subject of this article is a class of global optimization problems, in which the variables can be divided into two groups such that, in each group, the functions involved have the same structure (e.g. linear, convex or concave, etc.). Based on the decomposition idea of Benders (Ref. 1), a corresponding master problem is defined on the space of one of the two groups of variables. The objective function of this master problem is in fact the optimal value function of a nonlinear parametric optimization problem. To solve the resulting master problem, a branch-and-bound scheme is proposed, in which the estimation of the lower bounds is performed by applying the well-known weak duality theorem in Lagrange duality. The results of this article concentrate on two subjects: investigating the convergence of the general algorithm and solving dual problems of some special classes of nonconvex optimization problems. Based on results in sensitivity and stability theory and in parametric optimization, conditions for the convergence are established by investigating the so-called dual properness property and the upper semicontinuity of the objective function of the master problem. The general algorithm is then discussed in detail for some nonconvex problems including concave minimization problems with a special structure, general quadratic problems, optimization problems on the efficient set, and linear multiplicative programming problems.
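The lower bounds used in such a scheme come from ordinary weak Lagrangian duality; in generic notation (for illustration only), for the problem restricted to a partition set X and any multiplier \lambda \ge 0,

    \[
      \inf_{x \in X,\ g(x) \le 0} f(x)
      \;\ge\;
      \inf_{x \in X} \big\{\, f(x) + \lambda^{\top} g(x) \,\big\},
    \]

so evaluating the Lagrangian infimum for any fixed \lambda yields a valid lower bound on that partition set; the dual properness and upper semicontinuity conditions studied in the article govern when such bounds suffice for convergence.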

6.
We present a unified approach to establishing the existence of global minima of a (non)convex constrained optimization problem. Our results unify and generalize previous existence results for convex and nonconvex programs, including the Frank-Wolfe theorem, and for (quasi) convex quadratically constrained quadratic programs and convex polynomial programs. For example, instead of requiring the objective/constraint functions to be constant along certain recession directions, we only require them to linearly recede along these directions. Instead of requiring the objective/constraint functions to be convex polynomials, we only require the objective function to be a (quasi)convex polynomial over a polyhedral set and the constraint functions to be convex polynomials or the composition of coercive functions with linear mappings. We thank Professor Dimitri Bertsekas for his comments and support in the writing of this paper.

7.
A Branch-and-Bound Method for Nonconvex Quadratic Programming Problems with Bound Constraints
This paper studies nonconvex quadratic programming problems with bound constraints. Taking the ball-constrained quadratic programming problem and the linearly constrained convex quadratic programming problem as subproblems, we invoke for each an efficient algorithm for computing its global optimal solution. Several tight and relaxed bounding strategies are proposed, a branch-and-bound algorithm for finding the global optimal solution of the original problem is given, and its convergence is proved; different combinations of bounding strategies yield different branch-and-bound algorithms. Finally, we briefly discuss branch-and-bound ideas for globally solving nonconvex quadratic programming problems over general bounded convex regions.

8.
A deterministic global optimization method is developed for a class of discontinuous functions. McCormick’s method to obtain relaxations of nonconvex functions is extended to discontinuous factorable functions by representing a discontinuity with a step function. The properties of the relaxations are analyzed in detail; in particular, convergence of the relaxations to the function is established given some assumptions on the bounds derived from interval arithmetic. The obtained convex relaxations are used in a branch-and-bound scheme to formulate lower bounding problems. Furthermore, convergence of the branch-and-bound algorithm for discontinuous functions is analyzed and assumptions are derived to guarantee convergence. A key advantage of the proposed method over reformulating the discontinuous problem as a MINLP or MPEC is avoiding the increase in problem size that slows global optimization. Several numerical examples for the global optimization of functions with discontinuities are presented, including ones taken from process design and equipment sizing as well as discrete-time hybrid systems.
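For reference, the classical McCormick construction that this method extends replaces each bilinear factor w = xy, with x ∈ [x^L, x^U] and y ∈ [y^L, y^U], by its convex and concave envelopes (standard relaxation, shown for illustration):

    \[
    \begin{aligned}
      w &\ge x^{L} y + x\, y^{L} - x^{L} y^{L}, &\qquad w &\ge x^{U} y + x\, y^{U} - x^{U} y^{U},\\
      w &\le x^{U} y + x\, y^{L} - x^{U} y^{L}, &\qquad w &\le x^{L} y + x\, y^{U} - x^{L} y^{U};
    \end{aligned}
    \]

the present work adds relaxation rules for step functions so that the same factorable propagation can be applied to discontinuous functions.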

9.
The problem of finding the best rank-one approximation to higher-order tensors has extensive engineering and statistical applications. It is well known that this problem is equivalent to a homogeneous polynomial optimization problem. In this paper, we study theoretical results and numerical methods for this problem, focusing in particular on the 4th-order symmetric tensor case. First, we reformulate the polynomial optimization problem as a matrix programming problem and show the equivalence between the two. Then, we prove that there is no duality gap between the reformulation and its Lagrangian dual problem. Concerning approaches to the problem, we propose two relaxed models. The first is a convex quadratic matrix optimization problem regularized by the nuclear norm, while the second is a quadratic matrix program regularized by a truncated nuclear norm, which is a D.C. function and therefore nonconvex. To overcome the difficulty of solving this nonconvex problem, we approximate the nonconvex penalty by a convex term. We propose the proximal augmented Lagrangian method to solve these two relaxed models. In order to obtain a global solution, we propose an alternating least eigenvalue method after solving the relaxed models and prove its convergence. Numerical results presented at the end demonstrate, especially for nonpositive tensors, the effectiveness and efficiency of the proposed methods.
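The equivalence mentioned at the start can be written generically as follows (standard background, not the paper's specific matrix reformulation): for a symmetric 4th-order tensor \mathcal{A}, the best symmetric rank-one approximation is governed by the homogeneous polynomial problem

    \[
      \max_{\|x\|_2 = 1}\ \Big|\langle \mathcal{A},\ x \otimes x \otimes x \otimes x\rangle\Big|
      \;=\;
      \max_{\|x\|_2 = 1}\ \Big|\sum_{i,j,k,l} a_{ijkl}\, x_i x_j x_k x_l\Big|,
    \]

with the optimal value \lambda and maximizer x giving the closest rank-one tensor \lambda\, x^{\otimes 4}.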

10.
Natural basic concepts in multiple-objective optimization lead to difficult multiextremal global optimization problems. Examples include detection of efficient points when nonconvexities occur, and optimization of a linear function over the efficient set in the convex (even linear) case. Assuming that a utility function exists allows one to replace in general the multiple-objective program by a single, nonconvex optimization problem, which amounts to a minimization over the efficient set when the utility function is increasing. A new algorithm is discussed for this utility function program which, under natural mild conditions, converges to an ε-approximate global solution in a finite number of iterations. Applications include linear, convex, indefinite quadratic, Lipschitz, and d.c. objectives and constraints.

11.
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions two versions of the so-called αBB-underestimator, valid for twice-differentiable functions, are integrated in the actual reformulation framework. However, in contrast to what is done in branch-and-bound type algorithms, no direct branching is performed in the actual algorithm. Instead a piecewise convex reformulation is used to convexify the entire problem in an extended variable-space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solution to the convexified and overestimated problem will form a converging sequence towards a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.

12.
In this paper we propose a new branch and bound algorithm using a rectangular partition and an ellipsoidal technique for minimizing a nonconvex quadratic function with box constraints. The bounding procedures are investigated by d.c. (difference of convex functions) optimization algorithms, called DCA. This is based upon the fact that the application of DCA to the problems of minimizing a quadratic form over an ellipsoid and/or over a box is efficient. Some details of computational aspects of the algorithm are reported. Finally, numerical experiments on a large set of test problems showing the efficiency of our algorithm are presented.
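As a self-contained illustration of the DCA step on this problem class (a minimal sketch under assumed data, not the paper's implementation): split the nonconvex quadratic f(x) = ½xᵀQx + cᵀx as g(x) − h(x) with g(x) = ½xᵀ(Q + ρI)x + cᵀx and h(x) = ½ρ‖x‖², where ρ ≥ max(0, −λ_min(Q)); each iteration linearizes h at the current point and minimizes the resulting convex quadratic over the box.

    import numpy as np
    from scipy.optimize import minimize

    def dca_box_qp(Q, c, lb, ub, x0, max_iter=100, tol=1e-8):
        """Illustrative DCA for min 0.5*x'Qx + c'x over the box [lb, ub]."""
        Q = np.asarray(Q, dtype=float)
        c = np.asarray(c, dtype=float)
        n = len(c)
        rho = max(0.0, -np.linalg.eigvalsh(Q).min())   # shift making Q + rho*I positive semidefinite
        Qc = Q + rho * np.eye(n)                       # Hessian of the convex part g
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            y = rho * x                                # gradient of h at the current iterate
            # Convex subproblem: minimize g(z) - <y, z> over the box
            obj = lambda z: 0.5 * z @ Qc @ z + (c - y) @ z
            grad = lambda z: Qc @ z + c - y
            res = minimize(obj, x, jac=grad, bounds=list(zip(lb, ub)), method="L-BFGS-B")
            if np.linalg.norm(res.x - x) < tol:
                x = res.x
                break
            x = res.x
        return x, 0.5 * x @ Q @ x + c @ x

DCA of this kind returns only a critical point; in the paper it is used for the bounding and incumbent computations inside the rectangular branch and bound scheme, which supplies the global guarantee.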

13.
In this paper, we consider an optimal zero-forcing beamformer design problem in a multi-user multiple-input multiple-output broadcast channel. The minimum user rate is maximized subject to zero-forcing constraints and a power constraint on each base station antenna array element. The natural formulation leads to a nonconvex optimization problem. This problem is shown to be equivalent to a convex optimization problem with a linear objective function, linear equality and inequality constraints, and quadratic inequality constraints. Here, the indirect elimination method is applied to reduce the convex optimization problem to an equivalent convex optimization problem of lower dimension with only inequality constraints. The primal-dual interior point method is utilized to develop an effective algorithm (in terms of computational efficiency) by solving the modified KKT equations with Newton's method. Numerical simulations are carried out. Compared to algorithms based on a trust region interior point method and a sequential quadratic programming method, the proposed method is observed to be much superior in terms of computational efficiency.
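A generic form of the convex reformulation described here (illustrative notation, not necessarily the paper's): with channel vectors h_k, beamformers w_k, and per-antenna power limits P_i, the zero-forcing constraints make each user's rate monotone in Re(h_k^H w_k), so maximizing the minimum rate is equivalent to

    \[
    \begin{aligned}
      \max_{\{w_k\},\,t}\quad & t\\
      \text{s.t.}\quad & \operatorname{Re}\big(h_k^{H} w_k\big) \ge t \ \ \text{for all } k, \qquad
        h_j^{H} w_k = 0 \ \ (j \ne k),\\
      & \textstyle\sum_{k} \big|[w_k]_i\big|^2 \le P_i \ \ \text{for each antenna element } i,
    \end{aligned}
    \]

i.e. a linear objective with linear equality/inequality constraints and convex quadratic (per-antenna power) constraints, matching the structure stated in the abstract.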

14.
This paper addresses an algorithm for minimizing the product of two nonnegative convex functions over a convex set. It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function. Computational results indicate that our algorithm is efficient when the objective function is the product of a linear and a quadratic function and the constraints are linear. An extension of our algorithm for minimizing the sum of a convex function and a product of two convex functions is also discussed.
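One standard device for reducing the product of two nonnegative convex functions to convex subproblems in a higher dimensional space (shown for illustration; the paper's embedding may differ in detail) uses the AM–GM inequality: for f_1, f_2 \ge 0,

    \[
      f_1(x)\, f_2(x) \;=\; \inf_{\xi > 0}\ \tfrac{1}{4}\Big(\xi\, f_1(x) + \tfrac{1}{\xi}\, f_2(x)\Big)^{2},
    \]

and for each fixed \xi > 0 the function \xi f_1 + \xi^{-1} f_2 is convex, so the problem becomes a one-parameter family of convex programs over which branch and bound can operate.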

15.
Motivated by the fact that important real-life problems, such as the protein docking problem, can be accurately modeled by minimizing a nonconvex piecewise-quadratic function, a nonconvex underestimator is constructed as the minimum of a finite number of strictly convex quadratic functions. The nonconvex underestimator is generated by minimizing a linear function on a reverse convex region and utilizes sample points from a given complex function to be minimized. The global solution of the piecewise-quadratic underestimator is known exactly and gives an approximation to the global minimum of the original function. Successive shrinking of the initial search region to which this procedure is applied leads to fairly accurate estimates, within 0.0060%, of the global minima of synthetic nonconvex functions for which the global minima are known. Furthermore, this process can approximate a nonconvex protein docking function global minimum within four-figure relative accuracy in six refinement steps. This is less than half the number of refinement steps required by previous models such as the convex kernel underestimator (Mangasarian et al., Computational Optimization and Applications, to appear) and produces higher accuracy here.

16.
A new algorithm to solve nonconvex NLP problems is presented. It is based on the solution of two problems. The reformulated problem RP is a suitable reformulation of the original problem and involves convex terms and concave univariate terms. The main problem MP is a nonconvex NLP that outer-approximates the feasible region and underestimates the objective function. MP involves convex terms and terms which are the products of concave univariate functions and new variables. Fixing the variables in the concave terms, a convex NLP that overestimates the feasible region and underestimates the objective function is obtained from MP. Like most deterministic global optimization algorithms, bounds on all the variables in the nonconvex terms must be provided. MP forces the objective value to improve and minimizes the difference between the upper and lower bounds of all the variables either to zero or to a positive value. In the first case, a feasible solution of the original problem is reached and the objective function is improved. In general terms, the second case corresponds to an infeasible solution of the original problem due to the existence of gaps in some variables. A branching procedure is performed in order to either prove that there is no better solution or reduce the domain, eliminating the local solution of MP that was found. The MP solution indicates a key point at which to branch. A bound reduction technique is implemented to accelerate the convergence speed. Computational results demonstrate that the algorithm compares very favorably to other approaches when applied to test problems and process design problems. It is typically faster and produces very accurate results.

17.
Motivated by weakly convex optimization and quadratic optimization problems, we first show that there is no duality gap between a difference of convex (DC) program over DC constraints and its associated dual problem. We then provide certificates of global optimality for a class of nonconvex optimization problems. As an application, we derive characterizations of robust solutions for uncertain general nonconvex quadratic optimization problems over nonconvex quadratic constraints.

18.
In this paper, we propose a branch-and-bound algorithm for finding a global optimal solution for a nonconvex quadratic program with convex quadratic constraints (NQPCQC). We first reformulate NQPCQC by adding some nonconvex quadratic constraints induced by eigenvectors of negative eigenvalues associated with the nonconvex quadratic objective function to Shor’s semidefinite relaxation. Under the assumption of having a bounded feasible domain, these nonconvex quadratic constraints can be further relaxed into linear ones to form a special semidefinite programming relaxation. Then an efficient branch-and-bound algorithm branching along the eigendirections of negative eigenvalues is designed. The theoretic convergence property and the worst-case complexity of the proposed algorithm are proved. Numerical experiments are conducted on several types of quadratic programs to show the efficiency of the proposed method.
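Shor's semidefinite relaxation referred to here lifts x to a matrix variable X standing in for xxᵀ (standard form, before the paper's additional eigenvector-induced constraints): a QCQP min xᵀQ₀x + c₀ᵀx subject to xᵀQ_i x + c_iᵀx + d_i ≤ 0 is relaxed to

    \[
    \begin{aligned}
      \min_{x,\,X}\quad & \operatorname{tr}(Q_0 X) + c_0^{\top} x\\
      \text{s.t.}\quad & \operatorname{tr}(Q_i X) + c_i^{\top} x + d_i \le 0, \quad i = 1,\dots,m,\\
      & \begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix} \succeq 0,
    \end{aligned}
    \]

where the semidefinite constraint is the convex surrogate for the nonconvex equation X = xxᵀ; the extra constraints along negative-eigenvalue eigendirections tighten this relaxation and supply the branching directions.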

19.
A crucial problem for many global optimization methods is how to handle partition sets whose feasibility is not known. This problem is solved for broad classes of feasible sets including convex sets, sets defined by finitely many convex and reverse convex constraints, and sets defined by Lipschitzian inequalities. Moreover, a fairly general theory of bounding is presented and applied to concave objective functions, to functions representable as differences of two convex functions, and to Lipschitzian functions. The resulting algorithms allow one to solve any global optimization problem whose objective function is of one of these forms and whose feasible set belongs to one of the above classes. In this way, several new fields of optimization are opened to the application of global methods.
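For example, the Lipschitz bounding rule treated here has the generic form (illustration only): if f has Lipschitz constant L on a partition set S and x_0 \in S is any sample point, then

    \[
      \min_{x \in S} f(x) \;\ge\; f(x_0) - L\,\operatorname{diam}(S),
    \]

so refining the partition shrinks diam(S) and drives the lower bound toward the true minimum over S, which is the mechanism the convergence theory builds on.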

20.
In this paper, we will develop an algorithm for solving a quadratic fractional programming problem which was recently introduced by Lo and MacKinlay to construct a maximal predictability portfolio, a new approach in portfolio analysis. The objective function of this problem is defined by the ratio of two convex quadratic functions, which is a typical global optimization problem with multiple local optima. We will show that a well-designed branch-and-bound algorithm using (i) Dinkelbach's parametric strategy, (ii) a linear overestimating function and (iii) an ω-subdivision strategy can solve problems of practical size in an efficient way. This algorithm is particularly efficient for Lo and MacKinlay's problem, in which the nonconcave part of the associated nonconvex quadratic programming problem has low rank.
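Dinkelbach's parametric strategy in (i) replaces the ratio by a parametric family (standard form, shown for illustration): to maximize f(x)/g(x) with g > 0 over the feasible set X, define

    \[
      F(\lambda) \;=\; \max_{x \in X}\ \big\{\, f(x) - \lambda\, g(x) \,\big\};
    \]

\lambda^{*} is the optimal ratio exactly when F(\lambda^{*}) = 0, and the iteration \lambda_{k+1} = f(x_k)/g(x_k), with x_k a maximizer for \lambda_k, converges to \lambda^{*}. For the ratio of two convex quadratics each parametric subproblem is itself a nonconvex (difference-of-convex) program, which is where the overestimation and subdivision steps of the branch-and-bound algorithm come in.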
