Similar Articles (20 results)
1.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centers that minimises the total scatter of the points allocated to them. In this paper a new constructive method, a general framework to improve the performance of greedy constructive heuristics, and a problem-space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of point density. Elements of adaptive computation and periodic construction–deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of the input data, for which a controlled perturbation strategy together with intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of benchmarks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.
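To make the density-based construction idea concrete, here is a minimal sketch under assumptions of my own: the density of a point is taken to be the number of other points within a fixed radius, the p densest points seed the clusters, and the remaining points are assigned greedily to the nearest seed with spare capacity. None of these specific choices (the radius parameter, the seeding rule, the allocation order) are claimed to be the authors' exact procedure; `points` is an n×d NumPy array and `weights` any indexable sequence.

```python
import numpy as np

def density_seeded_ccp(points, weights, p, capacity, radius):
    """Toy density-based constructive heuristic for the CCP (illustrative only)."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    density = (dists < radius).sum(axis=1) - 1            # neighbours within `radius`
    seeds = list(np.argsort(-density)[:p])                # p densest points act as centers
    load = {s: weights[s] for s in seeds}
    assign = {s: s for s in seeds}
    rest = sorted(set(range(n)) - set(seeds), key=lambda i: dists[i, seeds].min())
    for i in rest:
        for s in sorted(seeds, key=lambda s: dists[i, s]):
            if load[s] + weights[i] <= capacity:           # respect the cluster capacity
                assign[i] = s
                load[s] += weights[i]
                break
        # points that fit in no cluster are simply left unassigned in this sketch
    scatter = sum(dists[i, s] for i, s in assign.items())
    return assign, scatter
```

A problem-space search in the spirit of the paper would then repeatedly perturb the input data, rerun such a constructive heuristic, and keep the best solution evaluated on the unperturbed data.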

2.
A new trust region technique for the maximum weight clique problem
A new simple generalization of the Motzkin–Straus theorem for the maximum weight clique problem is formulated and directly proved. Within this framework a trust region heuristic is developed. In contrast to usual trust region methods, it regards not only the global optimum of a quadratic objective over a sphere, but also a set of other stationary points of the program. We formulate and prove a condition under which a Motzkin–Straus optimum coincides with such a point. The developed method has complexity O(n^3), where n is the number of vertices of the graph. It was implemented in a publicly available software package, QUALEX-MS. Computational experiments indicate that the algorithm is exact on small graphs and very efficient on the DIMACS benchmark graphs and various random maximum weight clique problem instances.
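As background (this is the classical unweighted statement, not the paper's weighted generalization), the Motzkin–Straus theorem relates the clique number ω(G) of a graph G with adjacency matrix A_G to a quadratic program over the standard simplex:

```latex
\max_{x \in \Delta_n} x^{\top} A_G\, x \;=\; 1 - \frac{1}{\omega(G)},
\qquad
\Delta_n = \Bigl\{ x \in \mathbb{R}^n : \textstyle\sum_{i=1}^{n} x_i = 1,\ x \ge 0 \Bigr\}.
```

The paper's contribution is a weighted analogue of this identity together with a trust region search that examines several stationary points of the resulting quadratic program rather than only its global maximizer.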

3.
We consider the k-Hyperplane Clustering problem where, given a set of m points in R^n, we have to partition the set into k subsets (clusters) and determine a hyperplane for each of them, so as to minimize the sum of the squares of the Euclidean distances between the points and the hyperplane of the corresponding cluster. We give a nonconvex mixed-integer quadratically constrained quadratic programming formulation for the problem. Since even very small-size instances are challenging for state-of-the-art spatial branch-and-bound solvers like Couenne, we propose a heuristic in which many "critical" points are reassigned at each iteration. Such points, which are likely to be ill-assigned in the current solution, are identified using a distance-based criterion and their number is progressively decreased to zero. Our algorithm outperforms the best available one, proposed by Bradley and Mangasarian, on a set of real-world and structured randomly generated instances. For the largest instances, we obtain an average improvement in the solution quality of 54%.
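A minimal sketch of the underlying fit-and-reassign loop may help: each cluster's hyperplane is refit by total least squares, and every point is then moved to the hyperplane closest to it. The "critical point" selection rule and its decreasing schedule, which are the paper's actual contribution, are not reproduced here; the sketch assumes `P` is an m×n NumPy array, `labels` an integer array, and every cluster stays non-empty.

```python
import numpy as np

def fit_hyperplane(P):
    """Total-least-squares hyperplane for the rows of P: returns a unit normal w
    and offset b minimising the sum of squared orthogonal distances (w.x + b)^2."""
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)       # smallest right singular vector = normal
    w = Vt[-1]
    return w, -w @ c

def k_hyperplane_step(P, labels, k):
    """One alternating step: refit the k hyperplanes, then reassign every
    point to its nearest hyperplane. Returns new labels and the objective."""
    planes = [fit_hyperplane(P[labels == j]) for j in range(k)]
    d = np.stack([np.abs(P @ w + b) for w, b in planes], axis=1)
    return d.argmin(axis=1), float((d.min(axis=1) ** 2).sum())
```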

4.
An instance of the p-median problem consists of n demand points. The objective is to locate p supply points so as to minimize the total distance from the demand points to their nearest supply point. The p-median problem is polynomially solvable in one dimension but NP-hard in two or more dimensions, when either the Euclidean or the rectilinear distance measure is used. In this paper, we treat the p-median problem under a new distance measure, the directional rectilinear distance, which requires the assigned supply point for a given demand point to lie above and to the right of it. In a previous work, we showed that the directional p-median problem is polynomially solvable in one dimension; we give here an improved solution by reformulating the problem as a special case of the constrained shortest path problem. We have previously proven that the problem is NP-complete in two or more dimensions; we present here an efficient heuristic to solve it. Compared to the robust Teitz and Bart heuristic, our heuristic enjoys a substantial speedup while sacrificing little in terms of solution quality, making it an ideal choice for real-world applications with thousands of demand points.
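For concreteness, the directional rectilinear distance described above can be sketched as follows; the infinite value returned for an infeasible pairing is my own convention for "this supply point cannot serve this demand point", not notation from the paper.

```python
import math

def directional_rectilinear(demand, supply):
    """Rectilinear (Manhattan) distance that is defined only when the supply
    point lies above and to the right of the demand point; otherwise the
    assignment is treated as forbidden (infinite cost in this sketch)."""
    dx, dy = supply[0] - demand[0], supply[1] - demand[1]
    return dx + dy if dx >= 0 and dy >= 0 else math.inf
```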

5.
Given an undirected graph G=(V,E) with vertex set V={1,…,n} and edge set E⊆V×V, let w : V → Z+ be a weighting function that assigns to each vertex i∈V a positive integer. The maximum weight clique problem (MWCP) is to determine a clique of maximum weight. This paper introduces a tabu search heuristic whose key features include a combined neighborhood and a dedicated tabu mechanism using a randomized restart strategy for diversification. The proposed algorithm is evaluated on a total of 136 benchmark instances from different sources (DIMACS, BHOSLIB and set packing). Computational results disclose that our new tabu search algorithm outperforms the leading algorithm for the maximum weight clique problem and, in addition, rivals the performance of the best methods for the unweighted version of the problem without being specialized to exploit this problem class.
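The following is a deliberately small tabu-search sketch for the MWCP, using only add and drop moves with a fixed tabu tenure; the combined neighborhood, the dedicated tabu mechanism and the randomized restarts of the paper are richer than this. The graph is assumed to be given as a list `adj` of neighbour sets.

```python
import random

def tabu_mwcp(adj, w, iters=10000, tenure=7, seed=0):
    """Tiny tabu search for the maximum weight clique problem (sketch only)."""
    rng = random.Random(seed)
    n = len(w)
    clique, best, tabu = set(), set(), {}
    for it in range(iters):
        # vertices adjacent to the whole current clique and not currently tabu
        cand = [v for v in range(n) if v not in clique
                and clique <= adj[v] and tabu.get(v, -1) < it]
        if cand:
            clique.add(max(cand, key=lambda v: w[v]))      # add the heaviest candidate
        elif clique:
            v = rng.choice(sorted(clique))                 # stuck: drop a vertex ...
            clique.remove(v)
            tabu[v] = it + tenure                          # ... and forbid re-adding it briefly
        if sum(w[v] for v in clique) > sum(w[v] for v in best):
            best = set(clique)
    return best
```

For example, with `adj = [{1, 2}, {0, 2}, {0, 1}, set()]` and `w = [3, 1, 2, 10]`, the sketch correctly returns `{3}` (the single heavy vertex, weight 10) rather than the triangle {0, 1, 2} of total weight 6.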

6.
Let B be a boundary of a compact convex set X. We prove that the weight of X equals the weight of B provided B is Lindelöf or X is a standard compact convex set and B is the set of extreme points of X.

7.
In this paper a new heuristic hybrid technique for bound-constrained global optimization is proposed. We develop an iterative algorithm called GLPτS that uses genetic algorithms, LPτ low-discrepancy sequences of points and heuristic rules to find regions of attraction when searching for a global minimum of an objective function. Subsequently, the Nelder–Mead simplex local search technique is used to refine the solution. The combination of the three techniques (genetic algorithms, LPτO low-discrepancy search and simplex search) provides a powerful hybrid heuristic optimization method, which is tested on a number of benchmark multimodal functions with 10–150 dimensions; the method's properties (applicability, convergence, consistency and stability) are discussed in detail.
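A toy illustration of the "coarse global phase, then simplex refinement" pattern is given below. Plain uniform random sampling stands in for the paper's GA and LPτ low-discrepancy phase, which is a substitution on my part; SciPy's Nelder–Mead implementation is used for the local refinement.

```python
import numpy as np
from scipy.optimize import minimize

def sample_then_simplex(f, bounds, n_samples=2000, seed=0):
    """Crude global phase: uniform random sampling inside the box;
    local phase: Nelder-Mead started from the best sample."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x0 = X[np.argmin([f(x) for x in X])]       # best sampled point
    res = minimize(f, x0, method="Nelder-Mead")  # simplex refinement
    return res.x, res.fun

# example: Rosenbrock-style multimodal test function in 5 dimensions
f = lambda x: sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)
print(sample_then_simplex(f, bounds=[(-5, 5)] * 5))
```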

8.
Linear models for the approximate solution of the problem of packing the maximum number of equal circles of a given radius into a given closed bounded domain G are proposed. We construct a grid in G; the nodes of this grid form a finite set of points T, and it is assumed that the centers of the circles to be packed can be placed only at the points of T. The packing problems of equal circles with centers at the points of T are reduced to 0–1 linear programming problems. A heuristic algorithm for solving the packing problems based on the linear models is proposed. This algorithm makes it possible to solve packing problems for arbitrary connected closed bounded domains, independently of their shape, in a unified manner. Numerical results demonstrating the effectiveness of this approach are presented.
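One natural 0–1 model consistent with this description (the paper's exact formulation may differ) introduces a binary variable y_t for each admissible grid node t ∈ T, i.e. each node whose circle of radius r fits inside G, and forbids choosing two centers closer than 2r:

```latex
\max \sum_{t \in T} y_t
\quad \text{s.t.} \quad
y_s + y_t \le 1 \ \ \text{for all } s, t \in T \text{ with } \|s - t\| < 2r,
\qquad y_t \in \{0, 1\} \ \ \text{for all } t \in T .
```

A heuristic then only needs to produce a good feasible 0–1 vector, for instance by rounding an LP relaxation or by greedily selecting non-conflicting nodes; this is one plausible way to use such a linear model heuristically.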

9.
We examine an on-line polynomial-time heuristic for the NP-complete problem of scheduling m identical machines to minimize the maximum lateness of jobs with release times and due dates. The main result is that for any set of jobs, the difference between the lateness given by the heuristic and the minimum possible lateness is bounded by ((2m − 1)/m)Pmax, where Pmax is the largest processing time of any of the jobs. The conclusion is that for many applications, optimization is not necessary. We also prove that when all jobs have a common processing time P, the difference between the heuristic and the minimum lateness is bounded by P. This gives a fast estimate of the minimum possible lateness, and is useful in accelerating the practical performance of algorithms for the exact solution.
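The abstract does not spell the heuristic out, so as an illustration of the kind of on-line list-scheduling rule involved, here is a sketch that dispatches the released job with the earliest due date whenever a machine becomes free. This earliest-due-date rule is my stand-in, not necessarily the heuristic analysed in the paper; jobs are given as (release, processing, due) triples.

```python
import heapq

def edd_list_schedule_max_lateness(jobs, m):
    """List scheduling on m identical machines: whenever a machine becomes
    free, start the already-released job with the earliest due date.
    Returns the maximum lateness of the resulting schedule."""
    pending = sorted(jobs, key=lambda j: j[0])           # jobs ordered by release time
    machines = [0.0] * m                                 # machine free times (min-heap)
    heapq.heapify(machines)
    ready, i, max_late = [], 0, float("-inf")
    while i < len(pending) or ready:
        t = heapq.heappop(machines)                      # earliest machine free time
        if not ready and i < len(pending) and pending[i][0] > t:
            t = pending[i][0]                            # idle until the next release
        while i < len(pending) and pending[i][0] <= t:
            r, p, d = pending[i]
            heapq.heappush(ready, (d, p))                # ready jobs keyed by due date
            i += 1
        d, p = heapq.heappop(ready)                      # earliest-due-date job
        finish = t + p
        max_late = max(max_late, finish - d)             # lateness of this job
        heapq.heappush(machines, finish)
    return max_late
```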

10.
Given a set of m linear equations in n unknowns with the requirement that the solution space be nonnegative, a simple, heuristic proof is offered which shows that the extreme points of the set of feasible solutions are also basic feasible solutions. This proof can be used in many text treatments of Linear Programming which omit the proof on the grounds that it is too difficult.
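In the standard notation, the statement concerns the feasible set

```latex
F \;=\; \{\, x \in \mathbb{R}^n : A x = b,\ x \ge 0 \,\}, \qquad A \in \mathbb{R}^{m \times n},
```

where a basic feasible solution is a point of F whose positive components correspond to linearly independent columns of A; the claim proved heuristically in the paper is that every extreme point of F is such a point.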

11.
The maximin LHD (Latin hypercube design) problem calls for arranging N points in a k-dimensional grid so that no pair of points shares a coordinate and the distance between the closest pair of points is as large as possible. In this paper we propose to tackle this problem with heuristic algorithms belonging to the Iterated Local Search (ILS) family and show through computational experiments that the proposed algorithms compare very well with other heuristic approaches from the established literature.
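A compact sketch of the local-search core that such an ILS would iterate is given below. The representation (one permutation of 0..N−1 per dimension, so no coordinate value is repeated) is standard for LHDs, but the swap neighbourhood, the acceptance rule and the omitted perturbation/restart layer are simplifications of mine, not the paper's algorithm.

```python
import itertools, math, random

def min_pair_dist(design):
    """Smallest pairwise Euclidean distance among the points of the design."""
    return min(math.dist(a, b) for a, b in itertools.combinations(design, 2))

def lhd_local_search(N, k, iters=2000, seed=0):
    """Maximin LHD by repeated value swaps within a randomly chosen column."""
    rng = random.Random(seed)
    cols = [rng.sample(range(N), N) for _ in range(k)]     # one permutation per dimension
    points = lambda: [tuple(col[i] for col in cols) for i in range(N)]
    best, best_val = points(), min_pair_dist(points())
    for _ in range(iters):
        c, i, j = rng.randrange(k), rng.randrange(N), rng.randrange(N)
        cols[c][i], cols[c][j] = cols[c][j], cols[c][i]    # trial swap keeps the LHD property
        val = min_pair_dist(points())
        if val >= best_val:
            best, best_val = points(), val                 # accept improving or equal swaps
        else:
            cols[c][i], cols[c][j] = cols[c][j], cols[c][i]  # undo the swap
    return best, best_val
```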

12.
A ball spans a set of n points when none of the points lie outside it. Zarrabi-Zadeh and Chan (Proceedings of the 18th Canadian Conference on Computational Geometry (CCCG'06), pp 139–142, 2006) proposed an algorithm to compute an approximate spanning ball in the streaming model of computation, and showed that the radius of the approximate ball is within 3/2 of the minimum. Spurred by this, in this paper we consider the 2-dimensional extension of this result: the computation of spanning ellipses. The ball algorithm is simple to the point of being trivial, but the extension of the algorithm to ellipses is non-trivial. Surprisingly, the area of the approximate ellipse computed by this approach is not within a constant factor of the minimum, and we provide an elegant proof of this. We have implemented this algorithm, and experiments with a variety of inputs, except for a very pathological one, show that it can nevertheless serve as a good heuristic for computing an approximate ellipse.
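For reference, the simple streaming ball rule referred to above can be sketched as follows: whenever a new point falls outside the current ball, replace the ball by the smallest ball containing both the old ball and the new point. This is my reading of the "trivially simple" algorithm; consult the cited paper for the exact statement and the 3/2 analysis.

```python
import math

def streaming_ball(points):
    """Streaming approximate spanning ball (sketch): grow the ball minimally
    each time a point is seen outside it. Returns (center, radius)."""
    it = iter(points)
    c, r = list(next(it)), 0.0
    for p in it:
        d = math.dist(c, p)
        if d > r:                               # p is outside: grow the ball minimally
            new_r = (r + d) / 2.0
            shift = (new_r - r) / d             # move the center toward p
            c = [ci + shift * (pi - ci) for ci, pi in zip(c, p)]
            r = new_r
    return c, r
```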

13.
Given points in Euclidean space of arbitrary dimension, we prove that there exists a spanning tree having no vertex of degree greater than 3 whose weight is at most 1.559 times the weight of the minimum spanning tree. We also prove that there is a set of points for which no spanning tree of maximum degree 3 achieves a ratio smaller than 1.447. Our central result is based on the proof of the following claim: given n points in Euclidean space with one special point v, there exists a Hamiltonian path with an endpoint at v that is at most 1.559 times longer than the sum of the distances of the points to v. These proofs also lead to a way to find the tree in linear time given the minimum spanning tree.

14.
Consider a single machine and a set of n jobs that are available for processing at time 0. Job j has a processing time pj, a due date dj and a weight wj. We consider bi-criteria scheduling problems involving the maximum weighted tardiness and the number of tardy jobs. We give NP-hardness proofs for the scheduling problems when either one of the two criteria is the primary criterion and the other one is the secondary criterion. These results answer two open questions posed by Lee and Vairaktarakis in 1993. We consider complexity relationships between the various problems, give polynomial-time algorithms for some special cases, and propose fast heuristics for the general case. The effectiveness of the heuristics is measured by an empirical study. Our results show that one heuristic performs extremely well compared to optimal solutions.
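Both criteria are cheap to evaluate for a fixed job sequence, which is all that any constructive heuristic for these problems ultimately needs. The helper below is my own illustration of the two objective values, not code from the paper.

```python
def bicriteria_values(sequence, p, d, w):
    """Evaluate a single-machine job sequence: returns (maximum weighted
    tardiness, number of tardy jobs). p, d, w are dicts keyed by job."""
    t, max_wt, n_tardy = 0, 0, 0
    for j in sequence:
        t += p[j]                          # completion time of job j
        tardiness = max(0, t - d[j])
        max_wt = max(max_wt, w[j] * tardiness)
        n_tardy += tardiness > 0
    return max_wt, n_tardy
```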

15.
Probabilistically constrained problems, in which the random variables are finitely distributed, are non-convex in general and hard to solve. The p-efficiency concept has been widely used to develop efficient methods to solve such problems. Those methods require the generation of p-efficient points (pLEPs) and use an enumeration scheme to identify pLEPs. In this paper, we consider a random vector characterized by a finite set of scenarios and generate pLEPs by solving a mixed-integer programming (MIP) problem. We solve this computationally challenging MIP problem with a new mathematical programming framework. It involves solving a series of increasingly tighter outer approximations and employs, as algorithmic techniques, a bundle preprocessing method, strengthening valid inequalities, and a fixing strategy. The method is exact (resp., heuristic) and ensures the generation of pLEPs (resp., quasi pLEPs) if the fixing strategy is not (resp., is) employed, and it can be used to generate multiple pLEPs. To the best of our knowledge, generating a set of pLEPs using an optimization-based approach and developing effective methods for the application of the p-efficiency concept to the random variables described by a finite set of scenarios are novel. We present extensive numerical results that highlight the computational efficiency and effectiveness of the overall framework and of each of the specific algorithmic techniques.

16.
The capacitated p-median problem (CPMP) is one of the well-known facility-location problems. The objective is to minimize the total cost of locating a set of capacitated service points and allocating a set of demand points to the located service points, while the total demand allocated to each service point does not exceed its capacity limit. This paper presents an efficient heuristic algorithm based on the local branching and relaxation induced neighborhood search methods for the CPMP. The proposed algorithm is a heuristic technique that utilizes a general mixed-integer programming solver to explore neighborhoods. The parameters of the proposed algorithm are tuned by design of experiments. The proposed method is tested on a large set of benchmark instances. The results show that the method outperforms the best method found in the literature.
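For readers unfamiliar with local branching: given an incumbent solution x̄ of a MIP with binary variables x_j, the search is restricted to a Hamming ball around x̄ by adding a single linear constraint, and a general MIP solver then explores that neighbourhood as a sub-problem. This is the standard Fischetti–Lodi form of the constraint; the paper's exact neighbourhood parameters are not reproduced here.

```latex
\Delta(x, \bar{x}) \;=\; \sum_{j \,:\, \bar{x}_j = 1} (1 - x_j) \;+\; \sum_{j \,:\, \bar{x}_j = 0} x_j \;\le\; k .
```

Relaxation induced neighborhood search works in a similar spirit but fixes the variables on which the incumbent and the LP-relaxation solution agree, leaving the solver free on the remaining variables.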

17.
We consider quadratic programs with pure general integer variables. The objective function is quadratic and convex and the constraints are linear. An exact solution approach is proposed. It is decomposed into two phases. In the first phase, the initial problem is reformulated into an equivalent problem with a separable objective function. This is done by use of a Gauss decomposition of the Hessian matrix of the initial problem and requires the addition of some continuous variables and constraints. In the second phase, the reformulated problem is linearized by approximating each squared term with a set of K linear functions that correspond to the tangents of a hyperbola at K points. We give a proof of the intuitive property that when K is large enough, the optimal value of the obtained linear program is very close to the optimal value of the two previous problems, the initial problem and the reformulated separable problem. The remainder is dedicated to the implementation of a branch-and-bound algorithm for the solution of the linearized problem, and to its application to a set of instances. Several points are considered, among which the choice of the right value for the parameter K and the implementation of a sophisticated heuristic solution algorithm. The numerical comparison is done with CPLEX 12.2 since, in this case, the initial problem as well as the problem reformulated in the first step can be solved by CPLEX. We show that with our approach, the total CPU time is divided by a factor ranging from 1.2 to 131.6 for instances with 40–60 variables.

18.
The m-partial cover problem on a plane is defined as follows: given n points located at Cartesian coordinates (xj, yj) (j=1,…,n), each with an associated weight wj, a critical distance R and an integer number m, determine the location of m centres so that the sum of the weights of those points lying within distance R of at least one centre is maximised. Five heuristic procedures to solve the m-partial cover problem are presented and computational experience is reported. The use of the procedures for some related problems is discussed.
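The paper's five procedures are not reproduced here; as a baseline illustration of the problem itself, a simple greedy rule (my own choice) restricts candidate centres to the demand points and repeatedly opens the centre that covers the largest uncovered weight within R.

```python
import math

def greedy_partial_cover(points, weights, m, R):
    """Illustrative greedy heuristic for the m-partial cover problem.
    Candidate centres are restricted to the demand points themselves."""
    uncovered = set(range(len(points)))
    centres, covered_weight = [], 0.0
    for _ in range(m):
        best, best_gain, best_cov = None, 0.0, set()
        for c in range(len(points)):                      # try centring on each demand point
            cov = {j for j in uncovered
                   if math.dist(points[c], points[j]) <= R}
            gain = sum(weights[j] for j in cov)
            if gain > best_gain:
                best, best_gain, best_cov = c, gain, cov
        if best is None:                                  # nothing left worth covering
            break
        centres.append(points[best])
        uncovered -= best_cov
        covered_weight += best_gain
    return centres, covered_weight
```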

19.
The set cover problem is that of computing a minimum-weight subfamily F′, given a family F of weighted subsets of a base set U, such that every element of U is covered by some subset in F′. The k-set cover problem is a variant in which every subset is of size at most k. It has long been known that the problem can be approximated within a factor of H(k), the k-th harmonic number, by the greedy heuristic, but no better bound has been shown except for the case of unweighted subsets. In this paper we consider approximation of a restricted version of the weighted 3-set cover problem, in which any two distinct subset costs differ by a multiplicative factor of at least 2, as a first step towards better approximation of the general k-set cover problem. It will be shown, via LP duality, that an improved approximation bound of H(3)−1/6 can be attained when the greedy heuristic is suitably modified for this case. A key to our algorithm design and analysis is the Gallai–Edmonds structure theorem for maximum matchings.

20.
We present Undercover, a primal heuristic for nonconvex mixed-integer nonlinear programs (MINLPs) that explores a mixed-integer linear subproblem (sub-MIP) of a given MINLP. We solve a vertex covering problem to identify a smallest set of variables to fix, a so-called cover, such that each constraint is linearized. Subsequently, these variables are fixed to values obtained from a reference point, e.g., an optimal solution of a linear relaxation. Each feasible solution of the sub-MIP corresponds to a feasible solution of the original problem. We apply domain propagation to try to avoid infeasibilities, and conflict analysis to learn additional constraints from infeasibilities that are nonetheless encountered. We present computational results on a test set of mixed-integer quadratically constrained programs (MIQCPs) and MINLPs. It turns out that the majority of these instances allows for small covers. Although general in nature, we show that the heuristic is most successful on MIQCPs. It nicely complements existing root-node heuristics in different state-of-the-art solvers and helps to significantly improve the overall performance of the MINLP solver SCIP.
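The covering step can be pictured as follows for the quadratic case: fixing a set S of variables linearizes the problem if every square term x_i² has i in S and every product x_i·x_j has i or j in S, so a smallest such S is a vertex cover in a suitable conflict graph. The sketch below finds such a set greedily purely for illustration; the paper solves the covering problem exactly and also handles general nonlinear terms, which this toy ignores.

```python
def greedy_undercover(squares, bilinear):
    """Sketch of the Undercover covering step for a quadratic problem.
    squares: set of variable indices appearing in square terms;
    bilinear: list of (i, j) pairs appearing in product terms."""
    cover = set(squares)                          # squared variables must be fixed
    remaining = [e for e in bilinear if e[0] not in cover and e[1] not in cover]
    while remaining:
        counts = {}                               # how many uncovered products each variable hits
        for i, j in remaining:
            counts[i] = counts.get(i, 0) + 1
            counts[j] = counts.get(j, 0) + 1
        v = max(counts, key=counts.get)           # fix the most useful variable next
        cover.add(v)
        remaining = [e for e in remaining if v not in e]
    return cover

# example: x0^2 + x1*x2 <= 4 and x2*x3 >= 1  ->  fixing {x0, x2} linearizes both
print(greedy_undercover(squares={0}, bilinear=[(1, 2), (2, 3)]))
```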
