Similar documents
A total of 20 similar documents were retrieved.
1.
A branch-and-bound method for bound-constrained nonconvex quadratic programming
This paper studies nonconvex quadratic programming problems with bound constraints. Taking the ball-constrained quadratic programming problem and the linearly constrained convex quadratic programming problem as subproblems, we invoke for each an existing efficient algorithm for computing its global optimum. We propose several tight and relaxed bounding strategies, present a branch-and-bound algorithm for finding a global optimum of the original problem, and prove its convergence; different combinations of the bounding strategies yield different branch-and-bound algorithms. Finally, we briefly discuss branch-and-bound ideas for globally solving nonconvex quadratic programs over general bounded convex domains.
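The abstract only outlines the scheme. As a hedged illustration of how such a branch-and-bound loop is typically organized (not the authors' algorithm), the Python sketch below bisects the current box and combines caller-supplied bounding routines. The names `branch_and_bound`, `lower_bound`, `upper_bound`, `lb`, `ub`, and the Lipschitz constant `LIP` in the usage example are hypothetical placeholders; the paper's actual bounds come from ball-constrained and convex QP subproblems.

```python
# Hypothetical branch-and-bound skeleton for min f(x) over a box [lo, up].
# `lower_bound(lo, up)` must return a valid lower bound of f on the sub-box;
# `upper_bound(lo, up)` must return (f(x_feas), x_feas) for some feasible point.
# Both are placeholders for the subproblem solvers described in the abstract.
import heapq
import numpy as np

def branch_and_bound(lower_bound, upper_bound, lo, up, tol=1e-6, max_nodes=10000):
    best_val, best_x = upper_bound(lo, up)             # incumbent from the root box
    heap = [(lower_bound(lo, up), 0, lo, up)]          # nodes ordered by lower bound
    count = 0
    while heap and count < max_nodes:
        lb_k, _, lo_k, up_k = heapq.heappop(heap)
        if lb_k >= best_val - tol:                     # node cannot improve the incumbent
            continue
        ub_k, x = upper_bound(lo_k, up_k)              # try to improve the incumbent
        if ub_k < best_val:
            best_val, best_x = ub_k, x
        i = int(np.argmax(up_k - lo_k))                # bisect the longest edge
        mid = 0.5 * (lo_k[i] + up_k[i])
        for child_lo, child_up in _split(lo_k, up_k, i, mid):
            count += 1
            clb = lower_bound(child_lo, child_up)
            if clb < best_val - tol:
                heapq.heappush(heap, (clb, count, child_lo, child_up))
    return best_val, best_x

def _split(lo, up, i, mid):
    left_up, right_lo = up.copy(), lo.copy()
    left_up[i], right_lo[i] = mid, mid
    return (lo, left_up), (right_lo, up)

# Example usage: minimize the nonconvex f(x) = -x1^2 - x2^2 + x1 over [-1, 2]^2, using a crude
# Lipschitz-based lower bound f(mid) - LIP * ||half-diagonal||; LIP = 7 bounds the gradient norm
# on this box. The real algorithms referenced in the abstract use much tighter QP-based bounds.
f = lambda x: -x[0]**2 - x[1]**2 + x[0]
LIP = 7.0
def lb(lo, up):
    mid = 0.5 * (lo + up)
    return f(mid) - LIP * np.linalg.norm(0.5 * (up - lo))
def ub(lo, up):
    mid = 0.5 * (lo + up)
    return f(mid), mid
print(branch_and_bound(lb, ub, np.array([-1.0, -1.0]), np.array([2.0, 2.0]), tol=1e-3))
# prints the best value found (the true optimum is -6, attained at a corner of the box)
```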

2.
This article presents a global optimization algorithm for globally maximizing the sum of concave–convex ratios problem with a convex feasible region. The algorithm uses a branch and bound scheme in which a concave envelope of the objective function is constructed to obtain an upper bound of the optimal value by using a conical partition. As a result, the upper-bound subproblems during the algorithm search are all ordinary convex programs with fewer variables and constraints and do not grow in size from iteration to iteration, and furthermore a new bound-tightening strategy is proposed so that the upper-bound convex relaxation subproblems are closer to the original nonconvex problem, which enhances the solution procedure. Finally, some numerical examples are given to support our conclusions.

3.
The problem of optimizing some continuous function over the efficient set of a multiple objective programming problem can be formulated as a nonconvex global optimization problem with special structure. Based on the conical branch and bound algorithm in global optimization, we establish an algorithm for optimizing over efficient sets and discuss the implementation of this algorithm for some interesting special cases, including the case of biobjective programming problems.

4.
We discuss the solution of a class of nonconvex nondifferentiable programs with nonconvex nondifferentiable function constraints and propose an algorithm based on branch-and-bound techniques; the algorithm is globally convergent.

5.
A global optimization algorithm is presented for maximizing the sum of ratios of differences of convex functions over a nonconvex feasible region. This algorithm is based on a branch and bound framework. To obtain a difference of convex programming problem, the considered problem is first reformulated by introducing as few new variables as possible. By using subgradients and convex envelopes, the fundamental problem of estimating lower bounds in the branch and bound algorithm is transformed into a relaxed linear programming problem which can be solved efficiently. Furthermore, the size of the relaxed linear programming problem does not change during the algorithm search. Lastly, the convergence of the algorithm is analyzed and numerical results are reported.
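The bounding idea, replacing convex pieces by supporting linearizations so that the relaxation becomes a linear program, can be sketched generically. This is an illustration of the principle, not the paper's exact relaxation; the function `tangent_lp_lower_bound` and the example data are assumptions.

```python
# A minimal sketch of the linearization idea behind the lower-bounding step:
# for a differentiable convex g, the tangent at x0 is a global underestimator, so replacing g
# by its tangent and solving an LP gives a valid lower bound (generic illustration only).
import numpy as np
from scipy.optimize import linprog

def tangent_lp_lower_bound(g, grad_g, x0, A_ub, b_ub, bounds):
    """Lower-bound min g(x) s.t. A_ub @ x <= b_ub and variable bounds, via linearization at x0."""
    c = grad_g(x0)                                   # slope of the tangent plane
    const = g(x0) - c @ x0                           # its intercept
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return const + res.fun                           # tangent minimized over the polytope

# Example: g(x) = ||x||^2 over the simplex x1 + x2 <= 1, x >= 0.
g = lambda x: float(x @ x)
grad_g = lambda x: 2.0 * x
lbval = tangent_lp_lower_bound(g, grad_g, np.array([0.5, 0.5]),
                               A_ub=np.array([[1.0, 1.0]]), b_ub=np.array([1.0]),
                               bounds=[(0, None), (0, None)])
print(lbval)   # a valid lower bound on min ||x||^2 over the simplex (true optimum is 0)
```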

6.
A new branch-and-bound algorithm is proposed for nonconvex quadratic programming with box constraints. First, the objective function of the original problem is given a D.C. decomposition (written as the difference of two convex functions), and a subgradient method is used to compute an optimal value of its linear lower-bounding approximation, which is a lower bound of the original problem. Then a global ellipsoid algorithm is used to obtain an upper bound of the original problem, and the branch-and-bound scheme reduces the solution of the original problem to the solution of a sequence of subproblems. Finally, the convergence of the algorithm is proved, and numerical examples show that the algorithm is effective and feasible.
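As a hedged sketch of the first step (an illustration of the standard spectral D.C. split and box envelope, not necessarily the paper's exact construction), the code below decomposes a quadratic objective into a difference of convex quadratics and builds a convex function that underestimates it on the box. The helper `dc_underestimator` and the test data are made up for the example.

```python
# Sketch of the D.C. step on f(x) = 0.5 x'Qx + c'x over a box [lo, up].
# With rho >= max(0, -lambda_min(Q)) we can write f = g - h, where
# g(x) = 0.5 x'(Q + rho*I)x + c'x and h(x) = 0.5*rho*||x||^2 are both convex.
# Replacing the concave part -h by its separable affine convex envelope over the box
# gives a convex quadratic that underestimates f everywhere on the box.
import numpy as np

def dc_underestimator(Q, c, lo, up):
    rho = max(0.0, -np.linalg.eigvalsh(Q).min())        # spectral shift making Q + rho*I PSD
    Qg = Q + rho * np.eye(Q.shape[0])
    def under(x):
        g = 0.5 * x @ Qg @ x + c @ x                     # convex part, kept exactly
        chord = -0.5 * rho * ((lo + up) @ x - lo @ up)   # affine envelope of -0.5*rho*sum(x_i^2)
        return g + chord
    return under

Q = np.array([[1.0, 3.0], [3.0, -2.0]])                  # indefinite: f is nonconvex
c = np.array([-1.0, 0.5])
lo, up = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
under = dc_underestimator(Q, c, lo, up)
rng = np.random.default_rng(0)
x = rng.uniform(lo, up, size=(5, 2))
fvals = 0.5 * np.einsum('ij,jk,ik->i', x, Q, x) + x @ c
print(np.all(np.array([under(xi) for xi in x]) <= fvals + 1e-12))   # True: never exceeds f on the box
```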

7.
A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) by utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method, αBB, is implemented in C and tested on a variety of example problems.
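The α-based underestimator itself is easy to state. The sketch below is a generic illustration: α is supplied by the caller rather than derived from interval Hessian bounds as in the αBB literature, and the sample function `f` is made up. It builds L(x) = f(x) + α * sum_i (lo_i - x_i)(up_i - x_i) and checks the underestimation property on random points in the box.

```python
# Sketch of the alpha-underestimator used for generic nonconvex terms:
# L(x) = f(x) + alpha * sum_i (lo_i - x_i) * (up_i - x_i) <= f(x) for all x in the box
# whenever alpha >= 0, and L is additionally convex once alpha is large enough
# (at least -0.5 * lambda_min of the Hessian of f over the box).
import numpy as np

def alpha_underestimator(f, alpha, lo, up):
    def L(x):
        return f(x) + alpha * np.sum((lo - x) * (up - x))
    return L

f = lambda x: np.sin(x[0]) * x[1]                      # a simple nonconvex term
lo, up = np.array([0.0, -1.0]), np.array([np.pi, 1.0])
L = alpha_underestimator(f, alpha=1.0, lo=lo, up=up)   # alpha >= 0 guarantees underestimation;
                                                       # alpha = 1.0 is also large enough here for convexity
rng = np.random.default_rng(1)
xs = rng.uniform(lo, up, size=(1000, 2))
print(all(L(x) <= f(x) + 1e-12 for x in xs))           # True: L never exceeds f on the box
```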

8.
In this work we present a global optimization algorithm for solving a class of large-scale nonconvex optimization models that have a decomposable structure. Such models, which are very expensive to solve to global optimality, are frequently encountered in two-stage stochastic programming problems, engineering design, and also in planning and scheduling. A generic formulation and reformulation of the decomposable models is given. We propose a specialized deterministic branch-and-cut algorithm to solve these models to global optimality, wherein bounds on the global optimum are obtained by solving convex relaxations of these models with certain cuts added to them in order to tighten the relaxations. These cuts are based on the solutions of the sub-problems obtained by applying Lagrangean decomposition to the original nonconvex model. Numerical examples are presented to illustrate the effectiveness of the proposed method compared to available commercial global optimization solvers that are based on branch and bound methods.
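As a hedged toy illustration of where Lagrangean-decomposition bounds of this kind come from (not the authors' algorithm, and with made-up one-dimensional subproblems solved by brute force): duplicating a linking variable and dualizing the copy constraint splits the problem into independent pieces whose optimal values sum to a valid lower bound for any multiplier.

```python
# Toy sketch of a Lagrangean-decomposition bound (illustration only).
# Original problem:  min f1(x) + f2(x),  x in [0, 2].
# Duplicate x as z, add the linking constraint x = z, and dualize it with multiplier lam:
#   B(lam) = min_{x in [0,2]} [f1(x) - lam*x] + min_{z in [0,2]} [f2(z) + lam*z]
# is a lower bound on the optimum for every lam (weak duality).
import numpy as np

f1 = lambda t: (t - 1.5) ** 2          # two simple pieces standing in for the
f2 = lambda t: np.cos(3.0 * t)         # decomposable (possibly nonconvex) parts

grid = np.linspace(0.0, 2.0, 2001)     # brute-force the small subproblems for clarity

def lagrangean_bound(lam):
    sub1 = np.min(f1(grid) - lam * grid)
    sub2 = np.min(f2(grid) + lam * grid)
    return sub1 + sub2

true_opt = np.min(f1(grid) + f2(grid))                 # the undecomposed optimum
bounds = [lagrangean_bound(lam) for lam in np.linspace(-3, 3, 61)]
print(max(bounds) <= true_opt + 1e-9)                  # True: every B(lam) is a lower bound
```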

9.
A global optimization method, QBB, for twice-differentiable NLPs (Non-Linear Programming) is developed to operate within a branch-and-bound framework; it requires the construction of a relaxed convex problem on the basis of quadratic lower bounding functions for the generic nonconvex structures. Within an exhaustive simplicial division of the constrained region, a rigorous quadratic underestimation function is constructed for the generic nonconvex function structure by means of a maximal-eigenvalue analysis of the interval Hessian matrix. Each valid lower bound of the NLP problem, as the division progresses, is computed by convex programming of the relaxed optimization problem obtained by preserving the convex or linear terms, replacing the concave terms with their linear convex envelopes, and underestimating the special terms and the generic terms by their customized tight convex lower bounding functions or valid quadratic lower bounding functions, respectively. The standard convergence properties of the QBB algorithm for nonconvex global optimization problems are guaranteed. Preliminary computational studies are presented in order to evaluate the algorithmic efficiency of the proposed QBB approach.

10.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
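The two ingredients can be sketched together. This is a generic illustration: the Oren–Luenberger-type scaling factor and the max-of-recent-values Armijo rule shown here are common choices and may differ from the two forms analyzed in the paper; the function `ss_bfgs` and the test problem are made up.

```python
# Sketch of a self-scaling BFGS step with a nonmonotone Armijo backtracking line search.
# The scaling tau = (s'y)/(y'H y) is applied to the inverse-Hessian approximation H before
# the BFGS update; the Armijo test compares against the maximum of the last M function
# values rather than the current one.
import numpy as np
from collections import deque

def ss_bfgs(f, grad, x0, max_iter=200, M=5, delta=1e-4, tol=1e-6):
    n = len(x0)
    x, H = np.asarray(x0, float), np.eye(n)
    g = grad(x)
    recent = deque([f(x)], maxlen=M)                     # reference values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                                       # quasi-Newton direction
        alpha, fref = 1.0, max(recent)
        while f(x + alpha * d) > fref + delta * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5                                 # backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                                   # keep H positive definite
            tau = sy / (y @ H @ y)                       # self-scaling factor
            V = np.eye(n) - np.outer(s, y) / sy
            H = tau * (V @ H @ V.T) + np.outer(s, s) / sy
        x, g = x_new, g_new
        recent.append(f(x))
    return x

# Nonconvex test function: f(x) = x1^4 - 2*x1^2 + x1*x2 + x2^2.
f = lambda x: x[0]**4 - 2*x[0]**2 + x[0]*x[1] + x[1]**2
grad = lambda x: np.array([4*x[0]**3 - 4*x[0] + x[1], x[0] + 2*x[1]])
print(ss_bfgs(f, grad, np.array([2.0, 2.0])))            # converges to a stationary point
```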

11.
A new algorithm to solve nonconvex NLP problems is presented. It is based on the solution of two problems. The reformulated problem RP is a suitable reformulation of the original problem and involves convex terms and concave univariate terms. The main problem MP is a nonconvex NLP that outer-approximates the feasible region and underestimates the objective function. MP involves convex terms and terms which are the products of concave univariate functions and new variables. Fixing the variables in the concave terms, a convex NLP that overestimates the feasible region and underestimates the objective function is obtained from the MP. Like most deterministic global optimization algorithms, bounds on all the variables in the nonconvex terms must be provided. MP forces the objective value to improve and drives the difference between the upper and lower bounds of all the variables either to zero or to a positive value. In the first case, a feasible solution of the original problem is reached and the objective function is improved. In general terms, the second case corresponds to an infeasible solution of the original problem due to the existence of gaps in some variables. A branching procedure is performed in order to either prove that there is no better solution or reduce the domain, eliminating the local solution of MP that was found. The MP solution indicates a key point at which to branch. A bound reduction technique is implemented to accelerate the convergence speed. Computational results demonstrate that the algorithm compares very favorably to other approaches when applied to test problems and process design problems. It is typically faster and it produces very accurate results.

12.
In this paper, a new global optimization method is proposed for an optimization problem with twice-differentiable objective and constraint functions of a single variable. The method employs a difference of convex underestimator and a convex cut function, where the former is a continuous piecewise concave quadratic function, and the latter is a convex quadratic function. The main objectives of this research are to determine a quadratic concave underestimator that does not need an iterative local optimizer to determine the lower bounding value of the objective function, and to determine a convex cut function that effectively detects infeasible regions for nonconvex constraints. The proposed method is proven to have finite ε-convergence to the global optimum point. The numerical experiments indicate that the proposed method competes with another covering method, the index branch-and-bound algorithm, which uses the Lipschitz constant.

13.
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress toward going beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.

14.
Convex relaxations can be used to obtain lower bounds on the optimal objective function value of nonconvex quadratically constrained quadratic programs. However, for some problems, significantly better bounds can be obtained by minimizing the restricted Lagrangian function for a given estimate of the Lagrange multipliers. The difficulty in utilizing Lagrangian duality within a global optimization context is that the restricted Lagrangian is often nonconvex. Minimizing a convex underestimate of the restricted Lagrangian overcomes this difficulty and facilitates the use of Lagrangian duality within a global optimization framework. A branch-and-bound algorithm is presented that relies on these Lagrangian underestimates to provide lower bounds and on the interval Newton method to facilitate convergence in the neighborhood of the global solution. Computational results show that the algorithm compares favorably to the Reformulation–Linearization Technique for problems with a favorable structure.

15.
This article presents a simplicial branch and duality bound algorithm for globally solving the sum of convex–convex ratios problem with a nonconvex feasible region. To our knowledge, little progress has been made so far on globally solving this problem. The algorithm uses a branch and bound scheme in which Lagrange duality theory is used to obtain the lower bounds. As a result, the lower-bounding subproblems during the algorithm search are all ordinary linear programs that can be solved very efficiently. It is proved that the algorithm possesses global convergence. Finally, numerical experiments are given to show the feasibility of the proposed algorithm.

16.
The problem of minimizing a convex function over the difference of two convex sets is called a ‘reverse convex program’. This is a typical problem in global optimization, in which local optima are in general different from global optima. Another typical example in global optimization is the optimization problem over the efficient set of a multiple criteria programming problem. In this article, we investigate some special cases of optimization problems over the efficient set which can be transformed equivalently into reverse convex programs in the space of so-called extreme criteria of the multiple criteria programming problems under consideration. A suitable algorithm of branch and bound type is then established for globally solving the resulting problems. Preliminary computational results with the proposed algorithm are reported.

17.
高岳林  张博 《计算数学》2020,42(2):207-222
This paper proposes a new global optimization algorithm for the linear sum-of-ratios program, an NP-hard nonlinear programming problem. First, by introducing p auxiliary variables, the original problem is equivalently transformed into a nonlinear program whose objective is a sum of products and which adds p new nonlinear constraints to the original problem. Convex and concave envelopes are then constructed to linearly relax the objective function and constraints of the equivalent problem, yielding a lower-bounding linear relaxation. On this basis a branch-and-bound algorithm for the original problem is proposed and its convergence is proved. Finally, numerical comparisons show that the proposed algorithm is feasible and effective.
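The envelope step can be illustrated generically. This is a sketch of the standard McCormick envelopes for a bilinear term, not the paper's exact relaxation, and the bounds and sample data are made up: after introducing beta_i = 1/(d_i'x + d_i0), each ratio becomes a product beta_i*(c_i'x + c_i0), and every bilinear term u*v with known factor bounds can be sandwiched between its convex and concave envelopes.

```python
# McCormick envelopes of a bilinear term w = u*v with u in [uL, uU], v in [vL, vU]:
# the tightest convex under- and concave over-estimators of u*v on that rectangle.
import numpy as np

def mccormick(u, v, uL, uU, vL, vU):
    under = max(uL * v + vL * u - uL * vL, uU * v + vU * u - uU * vU)   # convex envelope
    over = min(uU * v + vL * u - uU * vL, uL * v + vU * u - uL * vU)    # concave envelope
    return under, over

rng = np.random.default_rng(2)
uL, uU, vL, vU = -1.0, 2.0, 0.5, 3.0
ok = True
for _ in range(1000):
    u = rng.uniform(uL, uU)
    v = rng.uniform(vL, vU)
    lo_w, hi_w = mccormick(u, v, uL, uU, vL, vU)
    ok &= lo_w <= u * v + 1e-12 and u * v <= hi_w + 1e-12
print(ok)   # True: u*v always lies between its McCormick envelopes
```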

18.
Branch and bound approaches for nonconvex programming problems have been given in [1] and [4]. Crucial to both is the use of rectangular partitions, convex envelopes, and separable nonconvex portions of the objective function and constraints. We propose a similar algorithm which solves a sequence of problems in each of which the objective function is convex or even linear. The main difference between this approach and previous approaches is the use of general compact partitions instead of rectangular ones, and a different refining rule, so that the algorithm does not rely on the concept of convex envelopes and handles non-separable functions. First we describe a general algorithm and prove a convergence theorem under suitable regularity assumptions. Then we give, as an example, an algorithm for concave minimization problems.

19.
In this paper we propose a new branch and bound algorithm using a rectangular partition and an ellipsoidal technique for minimizing a nonconvex quadratic function with box constraints. The bounding procedures are investigated by d.c. (difference of convex functions) optimization algorithms, called DCA. This is based upon the fact that applying the DCA to the problems of minimizing a quadratic form over an ellipsoid and/or over a box is efficient. Some details of computational aspects of the algorithm are reported. Finally, numerical experiments on a number of test problems are presented, showing the efficiency of our algorithm.
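A hedged sketch of what one DCA pass can look like on this problem class follows (using the common spectral D.C. split, which need not be the decomposition chosen in the paper; `dca_box` and the data are made up): with f(x) = 0.5 x'Qx + c'x = g(x) - h(x), g(x) = 0.5*rho*||x||^2 + c'x, h(x) = 0.5 x'(rho*I - Q)x and rho >= lambda_max(Q), each DCA subproblem over the box reduces to a clipping operation.

```python
# Sketch of the DCA applied to min f(x) = 0.5 x'Qx + c'x over a box [lo, up], using the
# spectral split described above. Each DCA subproblem minimizes the convex g(x) - y'x,
# y = grad h(x_k), over the box; it is separable and solved in closed form by clipping.
import numpy as np

def dca_box(Q, c, lo, up, x0, max_iter=100, tol=1e-8):
    rho = max(np.linalg.eigvalsh(Q).max(), 1e-8)        # makes h convex (and g strictly convex)
    x = np.clip(np.asarray(x0, float), lo, up)
    for _ in range(max_iter):
        y = (rho * np.eye(len(c)) - Q) @ x              # y = grad h(x)
        x_new = np.clip((y - c) / rho, lo, up)          # argmin over the box of g(x) - y'x
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

Q = np.array([[1.0, 3.0], [3.0, -2.0]])                 # indefinite quadratic
c = np.array([-1.0, 0.5])
lo, up = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x = dca_box(Q, c, lo, up, x0=np.zeros(2))
print(x, 0.5 * x @ Q @ x + c @ x)                       # a stationary point and its objective value
```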

20.
An important approach in multiple criteria linear programming is the optimization of some function over the efficient or weakly efficient set. This is a very difficult nonconvex optimization problem, even in the case where the function to be optimized is linear. In this article we consider the problem of maximizing a concave function over the efficient or weakly efficient set. We show that this problem can essentially be formulated as a special global optimization problem in the space of the extreme criteria of the underlying multiple criteria linear program. An algorithm of branch and bound type is proposed for solving the resulting problem.
