Similar Documents
20 similar documents were retrieved (search time: 15 ms).
1.
In this paper we consider the problem of locating one new facility in the plane with respect to a given set of existing facilities where a set of polyhedral barriers restricts traveling. This non-convex optimization problem can be reduced to a finite set of convex subproblems if the objective function is a convex function of the travel distances between the new and the existing facilities (e.g., the median and center objective functions). An exact algorithm and a heuristic solution procedure based on this reduction result are developed.

2.
This paper presents exact and heuristic solution procedures for a multiproduct capacitated facility location (MPCFL) problem in which the demand for a number of different product families must be supplied from a set of facility sites, and each site offers a choice of facility types exhibiting different capacities. MPCFL generalizes both the uncapacitated (or simple) facility location (UFL) problem and the pure-integer capacitated facility location problem. We define a branch-and-bound algorithm for MPCFL that utilizes bounds formed by a Lagrangian relaxation of MPCFL which decomposes the problem into UFL subproblems and easily solvable 0-1 knapsack subproblems. The UFL subproblems are solved by the dual-based procedure of Erlenkotter. We also present a subgradient optimization-Lagrangian relaxation-based heuristic for MPCFL. Computational experience with the algorithm and the heuristic is reported. The MPCFL heuristic is seen to be extremely effective, generating solutions to the test problems that are on average within 2% of optimality, and the branch-and-bound algorithm is successful in solving all of the test problems to optimality.
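The bound-generation mechanism described above rests on subgradient optimization of a Lagrangian dual. The sketch below is not the MPCFL relaxation itself, only a minimal toy illustration of that scheme, assuming a single coupling constraint whose relaxation yields subproblems solvable by inspection; the data and step-size rule are made up for the example.

```python
# A minimal, generic illustration of the subgradient-optimization Lagrangian
# scheme the abstract describes -- NOT the MPCFL relaxation itself.  The toy
# problem, its data, and the step-size rule are assumptions for this sketch:
#   min sum c_i x_i   s.t.   sum x_i >= d,   x_i in {0,1}.
# Relaxing the coupling constraint with multiplier lam >= 0 gives subproblems
# solvable by inspection, playing the role the UFL/knapsack subproblems play
# in the paper's bound.
c = [4.0, 2.0, 7.0, 3.0, 5.0]   # hypothetical item costs
d = 3                            # at least d items must be selected

def lagrangian_subproblem(lam):
    """Solve min sum (c_i - lam) x_i + lam*d over x in {0,1}^n by inspection."""
    x = [1 if ci - lam < 0 else 0 for ci in c]
    value = sum((ci - lam) * xi for ci, xi in zip(c, x)) + lam * d
    return value, x

lam, best_bound = 0.0, float("-inf")
for k in range(1, 51):
    bound, x = lagrangian_subproblem(lam)
    best_bound = max(best_bound, bound)          # best lower bound so far
    subgrad = d - sum(x)                         # subgradient of the dual at lam
    lam = max(0.0, lam + (1.0 / k) * subgrad)    # projected subgradient step

print(f"best Lagrangian lower bound ~ {best_bound:.3f}")
```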

3.
This article presents for the first time an algorithm specifically designed for globally minimizing a finite, convex function over the weakly efficient set of a multiple objective nonlinear programming problem (V1) that has both nonlinear objective functions and a convex, nonpolyhedral feasible region. The algorithm uses a branch and bound search in the outcome space of problem (V1), rather than in the decision space of the problem, to find a global optimal solution. Since the dimension of the outcome space is usually much smaller than the dimension of the decision space, often by one or more orders of magnitude, this approach can be expected to considerably shorten the search. In addition, the algorithm can be easily modified to obtain an approximate global optimal weakly efficient solution after a finite number of iterations. Furthermore, all of the subproblems that the algorithm must solve can be easily solved, since they are all convex programming problems. The key, and sometimes quite interesting, convergence properties of the algorithm are proven, and an example problem is solved.

4.
This article presents a global optimization algorithm for maximizing the sum of concave–convex ratios over a convex feasible region. The algorithm uses a branch and bound scheme in which a concave envelope of the objective function is constructed via a conical partition to obtain an upper bound on the optimal value. As a result, the upper-bound subproblems arising during the search are all ordinary convex programs with fewer variables and constraints, and they do not grow in size from iteration to iteration. Furthermore, a new bound-tightening strategy is proposed so that the upper-bound convex relaxation subproblems are closer to the original nonconvex problem, which enhances the solution procedure. Finally, some numerical examples are given to validate our conclusions.

5.
In this paper, we propose a decomposition-based branch-and-bound (DBAB) algorithm for solving two-stage stochastic programs having mixed-integer first- and second-stage variables. A modified Benders' decomposition method is developed, where the Benders' subproblems define lower bounding second-stage value functions of the first-stage variables that are derived by constructing a certain partial convex hull representation of the two-stage solution space. This partial convex hull is sequentially generated using a convexification scheme such as the Reformulation-Linearization Technique (RLT) or lift-and-project process, which yields valid inequalities that are reusable in the subsequent subproblems by updating the values of the first-stage variables. A branch-and-bound algorithm is designed based on a hyperrectangular partitioning process, using the established property that any resulting lower bounding Benders' master problem defined over a hyperrectangle yields the same objective value as the original stochastic program over that region if the first-stage variable solution is an extreme point of the defining hyperrectangle or the second-stage solution satisfies the binary restrictions. We prove that this algorithm converges to a global optimal solution. Some numerical examples and computational results are presented to demonstrate the efficacy of this approach.
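For background, the classical Benders optimality cut that a lower bounding master problem accumulates can be written as follows. This is the standard continuous-recourse derivation, shown only for orientation; the paper's construction instead uses a partial convex hull (RLT or lift-and-project) representation to handle the mixed-integer second stage.

```latex
% Classical Benders optimality cut (continuous recourse shown for illustration;
% the paper's partial-convex-hull construction generalizes this idea).
\[
  Q(x) \;=\; \min_{y \ge 0} \{\, q^{\top} y \;:\; W y \ge h - T x \,\}
        \;=\; \max_{\pi \ge 0} \{\, \pi^{\top} (h - T x) \;:\; W^{\top} \pi \le q \,\},
\]
\[
  \text{so for a dual-optimal } \hat{\pi} \text{ obtained at } \hat{x}:\qquad
  \theta \;\ge\; \hat{\pi}^{\top} (h - T x)
  \quad\text{is a valid cut for all } x \text{ in the master problem.}
\]
```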

6.
An iterative linear programming algorithm for the solution of the convex programming problem is proposed. The algorithm partially solves a sequence of linear programming subproblems whose solution is shown to converge quadratically, superlinearly, or linearly to the solution of the convex program, depending on the accuracy to which the subproblems are solved. The given algorithm is related to inexact Newton methods for the nonlinear complementarity problem. Preliminary results for an implementation of the algorithm are given. This material is based on research supported by the National Science Foundation, Grants DCR-8521228 and CCR-8723091, and by the Air Force Office of Scientific Research, Grant AFOSR-86-0172. The author would like to thank Professor O. L. Mangasarian for stimulating discussions during the preparation of this paper.

7.
Dinkelbach's algorithm was developed to solve convex fractional programming. This method achieves the optimal solution of the optimisation problem by solving a sequence of non-linear convex programming subproblems defined by a parameter. In this paper it is shown that Dinkelbach's algorithm can be used to solve general fractional programming. The applicability of the algorithm depends on the possibility of solving the subproblems. Dinkelbach's extended algorithm is a framework that describes several algorithms proposed for linear fractional programming, integer linear fractional programming and convex fractional programming, and that can generate new algorithms. The applicability to new cases, such as nondifferentiable fractional programming and quadratic fractional programming, has been studied. We propose two modifications to speed up Dinkelbach's algorithm: one uses interpolation formulae to update the parameter that defines the subproblem, and the other truncates the solution of the subproblem. We give sufficient conditions for the convergence of these modifications. Computational experiments in linear fractional programming, integer linear fractional programming and non-linear fractional programming have been carried out to evaluate the efficiency of these methods.
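A minimal sketch of the basic Dinkelbach iteration on a toy one-dimensional convex fractional program follows; the functions, interval and tolerance are assumptions for illustration, not taken from the paper.

```python
# A minimal sketch of the basic Dinkelbach iteration for  max f(x)/g(x),  g > 0,
# on a toy one-dimensional instance (f, g, the interval and the tolerance are
# assumptions).  Each parametric subproblem
#   F(q) = max_x  f(x) - q * g(x)
# is solved numerically; the parameter q is then updated to the current ratio,
# and F(q*) = 0 characterizes the optimal ratio q*.
from scipy.optimize import minimize_scalar

f = lambda x: -(x - 2.0) ** 2 + 6.0      # concave numerator
g = lambda x: x ** 2 + 1.0               # convex, positive denominator

def solve_subproblem(q, lo=0.0, hi=4.0):
    """Maximize f(x) - q*g(x) on [lo, hi] (scipy minimizes, so negate)."""
    res = minimize_scalar(lambda x: -(f(x) - q * g(x)),
                          bounds=(lo, hi), method="bounded")
    return res.x, -res.fun

q, x = 0.0, 0.0
for _ in range(50):
    x, F_q = solve_subproblem(q)
    if abs(F_q) < 1e-8:                   # F(q) ~ 0  =>  q is (near) optimal
        break
    q = f(x) / g(x)                       # Dinkelbach parameter update

print(f"x* ~ {x:.4f}, optimal ratio ~ {q:.6f}")
```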

8.
A stratified random sampling plan is one in which the elements of the population are first divided into nonoverlapping groups, and then a simple random sample is selected from each group. In this paper, we focus on determining the optimal sample size of each group. We show that various versions of this problem can be transformed into a particular nonlinear program with a convex objective function, a single linear constraint, and bounded variables. Two branch and bound algorithms are presented for solving the problem. The first algorithm solves the transformed subproblems in the branch and bound tree using a variable pegging procedure. The second algorithm solves the subproblems by performing a search to identify the optimal Lagrange multiplier of the single constraint. We also present linearization and dynamic programming methods that can be used for solving the stratified sampling problem. Computational testing indicates that the pegging branch and bound algorithm is fastest for some classes of problems, and the linearization method is fastest for other classes of problems.
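For orientation, the continuous relaxation of the allocation problem with a single total-sample-size constraint and no per-stratum bounds has the classical closed-form Neyman allocation. The sketch below (with hypothetical data) shows that relaxed solution; the branch and bound algorithms of the paper are what handle integrality and the bounds on each group's sample size.

```python
# The continuous relaxation of the allocation problem with a single total-size
# constraint and no per-stratum bounds has the classical closed-form Neyman
# allocation, n_h proportional to N_h * S_h.  The data below are hypothetical.
N = [1200, 800, 500]        # stratum sizes (hypothetical)
S = [15.0, 40.0, 10.0]      # stratum standard deviations (hypothetical)
n_total = 300               # total sample size to allocate

weights = [Nh * Sh for Nh, Sh in zip(N, S)]
alloc = [n_total * w / sum(weights) for w in weights]

for h, nh in enumerate(alloc, start=1):
    print(f"stratum {h}: n_h ~ {nh:.1f}")
```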

9.
In this paper, we consider an optimization problem which aims to minimize a convex function over the weakly efficient set of a multiobjective programming problem. To solve such a problem, we propose an inner approximation algorithm in which two kinds of convex subproblems are solved successively. These convex subproblems are fairly easy to solve, and therefore the proposed algorithm is practically useful. The algorithm always terminates after finitely many iterations by accepting a compromise on weak efficiency for the multiobjective programming problem. Moreover, for the subproblem solved at each iteration of the algorithm, we suggest a procedure for eliminating redundant constraints.

10.
The paper formulates an extended model of the Weber problem in which the customers are represented by convex demand regions. The objective is to generate a site in R^2 that minimizes the sum of weighted Euclidean distances between the new facility and the farthest points of the demand regions. This location problem is decomposed into a polynomial number of subproblems: constrained Weber problems (CWPs). A projection contraction method is suggested to solve these CWPs. An algorithm and its complexity analysis are presented. Three techniques (bound tests, greedy choice and choice of starting point) are adopted to reduce the computational time. The restricted case of the facility is also considered. Preliminary computational results are reported, which show that with the above three techniques the algorithm is efficient. The authors were supported by the dissertation fund of Nari-Relays Corporation.
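As background, the classical point-demand Weber problem that this model extends is commonly solved by the Weiszfeld fixed-point iteration, sketched below with hypothetical data; the demand-region and farthest-point features of the paper are not reproduced here.

```python
# Weiszfeld iteration for the classical (point-demand) Weber problem:
# minimize the weighted sum of Euclidean distances to fixed customer points.
# Points, weights and the starting guess are hypothetical.
import numpy as np

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [5.0, 4.0]])  # customers
w = np.array([1.0, 2.0, 1.0, 3.0])                                 # weights

x = pts.mean(axis=0)                      # starting point
for _ in range(200):
    d = np.linalg.norm(pts - x, axis=1)
    if np.any(d < 1e-12):                 # iterate landed on a customer point
        break
    coef = w / d
    x_new = (coef[:, None] * pts).sum(axis=0) / coef.sum()
    if np.linalg.norm(x_new - x) < 1e-9:
        x = x_new
        break
    x = x_new

print("approximate Weber point:", np.round(x, 4))
```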

11.
Well known extensions of the classical transportation problem are obtained by including fixed costs for the production of goods at the supply points (facility location) and/or by introducing stochastic demand, modeled by convex nonlinear costs, at the demand points (the stochastic transportation problem, [STP]). However, the simultaneous use of concave and convex costs is not very well treated in the literature. Economies of scale often yield concave cost functions other than fixed charges, so in this paper we consider a problem with general concave costs at the supply points, as well as convex costs at the demand points. The objective function can then be represented as the difference of two convex functions, and is therefore called a d.c. function. We propose a solution method which reduces the problem to a d.c. optimization problem in a much smaller space, then solves the latter by a branch and bound procedure in which bounding is based on solving subproblems of the form of [STP]. We prove convergence of the method and report computational tests that indicate that quite large problems can be solved efficiently. Problems up to the size of 100 supply points and 500 demand points are solved.

12.
A customized Douglas-Rachford splitting method (DRSM) was recently proposed to solve two-block separable convex optimization problems with linear constraints and simple abstract constraints. The algorithm has an advantage over the well-known alternating direction method of multipliers (ADMM), the dual application of DRSM to the two-block convex minimization problem, in that its subproblems are unconstrained and therefore more likely to possess closed-form solutions. In this paper, we proceed further along this line and consider the primal application of DRSM for the general case m ≥ 3, i.e., the multi-block separable convex minimization problem with linear constraints whose objective function is separable into m individual convex functions without coupled variables. The resulting method fully exploits the separable structure and enjoys decoupled subproblems which can be solved simultaneously. Both the exact and inexact versions of the new method are presented in a unified framework. Under mild conditions, we prove the global convergence of the algorithm. Preliminary numerical experiments on extracting the background from corrupted surveillance video verify the encouraging efficiency of the new algorithm.
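For context, the two-block ADMM mentioned above (the dual application of DRSM) has a familiar three-step iteration. The sketch below runs it on a small lasso instance with made-up data and parameters; it is not the paper's multi-block primal DRSM.

```python
# Textbook two-block ADMM (scaled form) on a tiny lasso instance:
#   min (1/2)||A x - b||^2 + mu * ||z||_1   s.t.   x - z = 0.
# The data, penalty rho and regularization mu are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
mu, rho = 0.5, 1.0

x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)   # u: scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
lhs = AtA + rho * np.eye(10)                            # would be factored once in practice

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(300):
    x = np.linalg.solve(lhs, Atb + rho * (z - u))       # x-update (quadratic subproblem)
    z = soft(x + u, mu / rho)                           # z-update (soft-thresholding)
    u = u + x - z                                       # dual update

print("nonzeros in z:", int(np.count_nonzero(np.abs(z) > 1e-6)))
```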

13.
We propose a decomposition algorithm for a special class of nonconvex mixed integer nonlinear programming problems which have an assignment constraint. If the assignment decisions are decoupled from the remaining constraints of the optimization problem, we propose to use a column enumeration approach. The master problem is a partitioning problem whose objective function coefficients are computed via subproblems. These problems can be linear, mixed integer linear, (non-)convex nonlinear, or mixed integer nonlinear. However, the important property of the subproblems is that we can compute their exact global optimum quickly. The proposed technique will be illustrated solving a cutting problem with optimum nonlinear programming subproblems.

14.
A new deterministic algorithm for solving convex mixed-integer nonlinear programming (MINLP) problems is presented in this paper: the extended supporting hyperplane (ESH) algorithm uses supporting hyperplanes to generate a tight overestimated polyhedral set of the feasible set defined by linear and nonlinear constraints. A sequence of linear or quadratic integer-relaxed subproblems is first solved to rapidly generate a tight linear relaxation of the original MINLP problem. After an initial overestimated set has been obtained, the algorithm solves a sequence of mixed-integer linear programming or mixed-integer quadratic programming subproblems and refines the overestimated set by generating more supporting hyperplanes in each iteration. Compared to the extended cutting plane algorithm, ESH generates a tighter overestimated set, and unlike outer approximation, the generation point for the supporting hyperplanes is found by a simple line search procedure. In this paper it is proven that the ESH algorithm converges to a global optimum for convex MINLP problems. The ESH algorithm is implemented as the supporting hyperplane optimization toolkit (SHOT) solver, and an extensive numerical comparison of its performance against other state-of-the-art MINLP solvers is presented.
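The core ESH step is the line search that locates a point on the boundary of the nonlinear feasible set between an interior point and an infeasible trial point, and then adds a supporting hyperplane there. The sketch below illustrates that single step for one convex constraint; the constraint, the two points and the tolerance are assumptions, and the full algorithm's MILP/MIQP machinery is omitted.

```python
# A minimal sketch of the core ESH step: given an interior point x_int of the
# convex set {x : g(x) <= 0} and an infeasible trial point x_out, bisect along
# the segment to find a (near-)boundary point x* and emit the supporting
# hyperplane  g(x*) + grad g(x*)^T (x - x*) <= 0.  The data are assumptions.
import numpy as np

g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0                # convex constraint g(x) <= 0
grad_g = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

x_int = np.array([0.0, 0.0])      # strictly feasible interior point, g < 0
x_out = np.array([2.0, 1.5])      # infeasible trial point, g > 0

lo, hi = 0.0, 1.0                 # parameterize x(t) = x_int + t*(x_out - x_int)
for _ in range(60):               # bisection for the root of g along the segment
    mid = 0.5 * (lo + hi)
    if g(x_int + mid * (x_out - x_int)) > 0.0:
        hi = mid
    else:
        lo = mid

x_star = x_int + hi * (x_out - x_int)                 # (slightly outside) boundary point
a = grad_g(x_star)
rhs = a @ x_star - g(x_star)                          # cut:  a^T x <= rhs
print("supporting hyperplane coefficients:", np.round(a, 4), "rhs:", round(rhs, 4))
```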

15.
We consider a matrix approximation problem arising in the study of entanglement in quantum physics. This notion represents a certain type of correlations between subsystems in a composite quantum system. The states of a system are described by a density matrix, which is a positive semidefinite matrix with trace one. The goal is to approximate such a given density matrix by a so-called separable density matrix, and the distance between these matrices gives information about the degree of entanglement in the system. Separability here is expressed in terms of tensor products. We discuss this approximation problem for a composite system with two subsystems and show that it can be written as a convex optimization problem with special structure. We investigate related convex sets, and suggest an algorithm for this approximation problem which exploits the tensor product structure in certain subproblems. Finally some computational results and experiences are presented.

16.
This paper presents a feasible direction algorithm for the minimization of a pseudoconvex function over a smooth, compact, convex set. We establish that each cluster point of the generated sequence is an optimal solution of the problem without introducing anti-jamming procedures. Each iteration of the algorithm involves as subproblems only one line search for a zero of a continuously differentiable convex function and one univariate function minimization on a compact interval.

17.
In this paper we consider cardinality-constrained convex programs that minimize a convex function subject to a cardinality constraint and other linear constraints. This class of problems has found many applications, including portfolio selection, subset selection and compressed sensing. We propose a successive convex approximation method for this class of problems in which the cardinality function is first approximated by a piecewise linear DC function (difference of two convex functions) and a sequence of convex subproblems is then constructed by successively linearizing the concave terms of the DC function. Under some mild assumptions, we establish that any accumulation point of the sequence generated by the method is a KKT point of the DC approximation problem. We show that the basic algorithm can be refined by adding strengthening cuts in the subproblems. Finally, we report some preliminary computational results on cardinality-constrained portfolio selection problems.
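One commonly used piecewise-linear DC surrogate of the cardinality function, and the convex subproblem obtained by linearizing its concave part, can be written as follows; the notation (t, K, h, X) is illustrative, and the paper's exact approximation and constraint handling may differ.

```latex
% One common piecewise-linear DC surrogate of the cardinality function
% (illustrative; the paper's exact approximation may differ).  For t > 0,
\[
  \|x\|_0 \;\approx\; \sum_i \min\!\Big(\tfrac{|x_i|}{t},\, 1\Big)
  \;=\; \underbrace{\tfrac{1}{t}\|x\|_1}_{\text{convex}}
  \;-\; \underbrace{h(x)}_{\text{convex}},
  \qquad h(x) := \sum_i \max\!\Big(\tfrac{|x_i|}{t}-1,\, 0\Big).
\]
% Linearizing the concave term -h at the current iterate x^k with a
% subgradient \xi^k \in \partial h(x^k) gives the convex subproblem
\[
  \min_{x \in X} \; f(x)
  \quad \text{s.t.} \quad
  \tfrac{1}{t}\|x\|_1 - \Big( h(x^k) + (\xi^k)^{\top}(x - x^k) \Big) \;\le\; K,
\]
% whose feasible set is contained in that of the surrogate constraint
% (1/t)||x||_1 - h(x) <= K, since the linearization overestimates -h.
```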

18.
A method is presented for solving the finite nonlinear min-max problem. Quasi-Newton methods are used to approximately solve a sequence of differentiable subproblems where, for each subproblem, the cost function to minimize is a global regularization underestimating the finite maximum function. Every cluster point of the sequence generated is shown to be a stationary point of the min-max problem and therefore, in the convex case, to be a solution of the problem. Moreover, numerical results are given for a large set of test problems which show that the method is efficient in practice.
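A standard example of a smooth global regularization that underestimates a finite maximum is the shifted log-sum-exp. The sketch below minimizes that underestimate for a decreasing smoothing parameter on made-up component functions; it does not reproduce the paper's specific regularization or its quasi-Newton details.

```python
# Smoothed min-max via the shifted log-sum-exp underestimate:
#   mu*log(sum_i exp(f_i(x)/mu)) - mu*log(m)  <=  max_i f_i(x),
# with the gap vanishing as mu -> 0.  The component functions, the schedule for
# mu and the starting point are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

fs = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
      lambda x: x[0] ** 2 + (x[1] + 2.0) ** 2,
      lambda x: (x[0] + 1.0) ** 2 + (x[1] - 1.0) ** 2]

def smoothed_max(x, mu):
    vals = np.array([f(x) for f in fs])
    return mu * logsumexp(vals / mu) - mu * np.log(len(fs))

x = np.zeros(2)
for mu in [1.0, 0.1, 0.01]:                      # tighten the underestimate
    res = minimize(smoothed_max, x, args=(mu,), method="BFGS")
    x = res.x

print("approximate min-max point:", np.round(x, 4))
print("max_i f_i at that point  :", round(max(f(x) for f in fs), 4))
```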

19.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to the solution of large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for the choice of the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackle large-scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.

20.
In this paper, we introduce the notion of a weak sharp set of solutions to a variational inequality problem (VIP) in a reflexive, strictly convex and smooth Banach space, and present several equivalent conditions for it. We also prove, under some continuity and monotonicity assumptions, that if any sequence generated by an algorithm for solving (VIP) converges to a weak sharp solution, then we can obtain solutions of (VIP) by solving a finite number of convex optimization subproblems with linear objectives. Moreover, in order to characterize finite convergence of an iterative algorithm, we introduce the notion of a weak subsharp set of solutions to (VIP), which is more general than that of weak sharp solutions in Hilbert spaces. We establish a necessary and sufficient condition for the finite convergence of an algorithm for solving (VIP) whose generated sequence converges to a weak subsharp solution of (VIP), and show that the proximal point algorithm satisfies this condition. As a consequence, we prove that the proximal point algorithm converges finitely whenever the sequence it generates converges to a weak subsharp solution of (VIP).
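The finite convergence of the proximal point algorithm mentioned above is easy to visualize on a one-dimensional weakly sharp problem. The sketch below applies the proximal point iteration to f(x) = |x| (a VIP whose operator is the subdifferential of f) and reaches the exact solution after finitely many steps; the step size and starting point are assumptions, and the Banach-space generality and weak subsharpness notion of the paper are not captured here.

```python
# Finite convergence of the proximal point algorithm on a toy weakly sharp
# problem: minimizing f(x) = |x|.  Here prox_{lam*f} is soft-thresholding, so
# the iterates hit the exact solution x* = 0 after finitely many steps.
def prox_abs(v, lam):
    """Proximal operator of lam*|.| (soft-thresholding)."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

x, lam = 3.7, 1.0                      # assumed starting point and step size
for k in range(1, 100):
    x = prox_abs(x, lam)
    if x == 0.0:                       # exact solution reached in finitely many steps
        print(f"reached x* = 0 exactly at iteration {k}")
        break
```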

