Similar documents
A total of 20 similar documents were found (search time: 357 ms).
1.
This paper presents two new heuristics for the flowshop scheduling problem with sequence-dependent setup times (SDSTs) and the makespan minimization objective. The first is an extension of a procedure that has been very successful for the general flowshop scheduling problem. The other is a greedy randomized adaptive search procedure (GRASP), which is a technique that has achieved good results on a variety of combinatorial optimization problems. Both heuristics are compared to a previously proposed algorithm based on the traveling salesman problem (TSP). In addition, local search procedures are developed and adapted to each of the heuristics. A two-phase lower bounding scheme is presented as well. The first phase finds a lower bound based on the assignment relaxation for the asymmetric TSP. In phase two, attempts are made to improve the bound by inserting idle time. All procedures are compared for two different classes of randomly generated instances. In the first case where setup times are an order of magnitude smaller than the processing times, the new approaches prove superior to the TSP-based heuristic; for the case where both processing and setup times are identically distributed, the TSP-based heuristic outperforms the proposed procedures.
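To make the setting concrete, here is a minimal Python sketch (not the authors' code) of how the makespan of a given job permutation could be evaluated in a flowshop with sequence-dependent setup times, assuming anticipatory setups (a machine may perform the setup for its next job before that job arrives); the function and argument names are illustrative.

```python
def makespan_sdst(perm, p, s):
    """Illustrative makespan evaluation for a permutation flowshop with SDSTs.
    perm: job sequence; p[m][j]: processing time of job j on machine m;
    s[m][i][j]: setup time on machine m when job j directly follows job i.
    Assumes anticipatory setups (setup may overlap the job's upstream processing)."""
    n_machines = len(p)
    C = [0.0] * n_machines  # completion time of the last scheduled job on each machine
    prev = None
    for j in perm:
        for m in range(n_machines):
            setup_ready = C[m] + (s[m][prev][j] if prev is not None else 0.0)
            upstream_ready = C[m - 1] if m > 0 else 0.0  # job j just finished on machine m-1
            C[m] = max(setup_ready, upstream_ready) + p[m][j]
        prev = j
    return C[-1]
```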

2.
The sectoring arc routing problem (SARP) is introduced to model activities associated with the streets of large urban areas, like municipal waste collection. The aim is to partition the street network into a given number of sectors and to build a set of vehicle trips in each sector, to minimize the total duration of the trips. Two two-phase heuristics and one best insertion method are proposed. In the two-phase methods, phase 1 constructs the sectors using two possible heuristics, while phase 2 solves a mixed capacitated arc routing problem (MCARP) to compute the trips in each sector. The best insertion method determines sectors and trips simultaneously. In addition to solution cost, some evaluation criteria such as imbalance, diameter and dispersion measures are used to compare algorithms. Numerical results on large instances with up to 401 nodes and 1056 links (arcs or edges) are reported and analysed.

3.
In a random graph on n vertices, the maximum clique is likely to be of size very close to 2 lg n. However, the clique produced by applying the naive “greedy” heuristic to a random graph is unlikely to have size much exceeding lg n. The factor of two separating these estimates motivates the search for more effective heuristics. This article analyzes a heuristic search strategy, the Metropolis process, which is just one step above the greedy one in its level of sophistication. It is shown that the Metropolis process takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.
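The contrast between the two strategies can be sketched roughly as follows (illustrative Python, assuming an adjacency-set representation of the graph; the names and the acceptance rule are simplified stand-ins, not the process analysed in the paper).

```python
import math
import random

def greedy_clique(adj):
    """Naive greedy heuristic: scan vertices in random order, keep compatible ones."""
    clique = []
    for v in random.sample(list(adj), len(adj)):
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def metropolis_step(adj, clique, temperature):
    """One simplified Metropolis-style move: grow the clique when possible,
    otherwise accept a size-decreasing removal with probability exp(-1/T)."""
    v = random.choice(list(adj))
    if v in clique:
        if random.random() < math.exp(-1.0 / temperature):
            clique.remove(v)
    elif all(u in adj[v] for u in clique):
        clique.append(v)
    return clique
```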

4.
In this paper, we present a parallel greedy randomized adaptive search procedure (GRASP) for the Steiner problem in graphs. GRASP is a two-phase metaheuristic. In the first phase, solutions are constructed using a greedy randomized procedure. Local search is applied in the second phase, leading to a local minimum with respect to a specified neighborhood. In the Steiner problem in graphs, feasible solutions can be characterized by their non-terminal nodes (Steiner nodes) or by their key-paths. According to this characterization, two GRASP procedures are described using different local search strategies. Both use an identical construction procedure. The first uses a node-based neighborhood for local search, while the second uses a path-based neighborhood. Computational results comparing the two procedures show that while the node-based variant produces better quality solutions, the path-based variant is about twice as fast. A hybrid GRASP procedure combining the two neighborhood search strategies is then proposed. Computational experiments with a parallel implementation of the hybrid procedure are reported, showing that the algorithm found optimal solutions for 45 out of 60 benchmark instances and was never off by more than 4% of the optimal solution value. The average speedups observed on the test problems show that elapsed times decrease as the number of processors increases. Moreover, the main contribution of the parallel algorithm is that speedups of the same order as the number of processors are obtained precisely for the most difficult problems.

5.
6.
This paper introduces Empirically Adjusted Greedy Heuristics (EAGH), a procedure for designing greedy algorithms for a given combinatorial optimization problem, and illustrates how EAGH works with an application to makespan minimization in the permutation flow-shop problem. The basic idea behind EAGH is that a greedy heuristic can be seen as a member of an infinite set of heuristics, the set being defined by a function that depends on several parameters; each setting of the parameters corresponds to a specific greedy heuristic. The best element of the set, for a training set of problem instances, is then found by applying a non-linear optimization algorithm to a function that measures the quality of the solutions obtained on the training instances and depends on the parameters that characterize each specific algorithm. EAGH can improve known heuristics or find good new ones.
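As a rough illustration of the idea (this parameterization is invented here, not the one used in the paper), a parametric greedy rule for the permutation flow shop might look like the sketch below; each weight vector w defines one member of the heuristic family, and tuning w on training instances with a non-linear optimizer would select the "empirically adjusted" member.

```python
def parametric_priority(job, p, w):
    """Hypothetical parametric greedy index: a weighted mix of two simple criteria.
    p[m][j]: processing time of job j on machine m; w: parameter vector (w[0], w[1])."""
    total_work = sum(p[m][job] for m in range(len(p)))
    first_machine = p[0][job]
    return w[0] * total_work + w[1] * first_machine

def greedy_sequence(jobs, p, w):
    """Each w yields a different greedy heuristic; EAGH-style tuning searches over w."""
    return sorted(jobs, key=lambda j: parametric_priority(j, p, w), reverse=True)
```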

7.
A greedy clique decomposition of a graph is obtained by removing maximal cliques from a graph one by one until the graph is empty. We have recently shown that any greedy clique decomposition of a graph of order n has at most n²/4 cliques. A greedy max-clique decomposition is a particular kind of greedy clique decomposition in which maximum cliques are removed, instead of just maximal ones. In this paper, we bound the size of any greedy max-clique decomposition C of a graph of order n in terms of n(C), the number of vertices in C.
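A minimal sketch of the decomposition process described above (illustrative only; it grows an arbitrary maximal clique at each step rather than a maximum one, so it corresponds to the general greedy variant).

```python
def greedy_clique_decomposition(adj):
    """adj: dict mapping each vertex to the set of its neighbours (symmetric)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    decomposition = []
    while any(adj[v] for v in adj):
        v = next(u for u in adj if adj[u])            # any vertex with a remaining edge
        clique = {v}
        for u in list(adj[v]):                        # greedily extend to a maximal clique
            if all(u in adj[w] for w in clique):
                clique.add(u)
        decomposition.append(clique)
        for a in clique:                              # remove the clique's edges
            for b in clique:
                if a != b:
                    adj[a].discard(b)
    return decomposition
```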

8.
In this paper, approximate solution algorithms for discrete cost multicommodity network optimization problems are presented and compared. Firstly, extensions of classical greedy heuristics, based on link-rerouting and flow-rerouting, are presented in detail. Secondly, a new approximate solution algorithm is proposed, which basically consists of a heuristic implementation of the exact Benders-type cutting-plane generation method. All these algorithms are extensively compared on randomly generated graphs with up to 50 nodes and 90 links. It clearly appears that this new Benders-type approach is very promising, since it produces the best heuristic solutions.

9.
The diameter-constrained minimum spanning tree problem is an NP-hard combinatorial optimization problem that seeks a minimum cost spanning tree with a limit D imposed upon the length of any path in the tree. We first present four constructive greedy heuristics and then several second-order heuristics that perform improvements on feasible solutions, hopefully leading to better objective function values. We present a heuristic with an edge exchange mechanism, another that transforms a feasible spanning tree solution into a feasible diameter-constrained spanning tree solution, and finally another with a repetitive mechanism. Computational results show that the repetitive heuristics can improve considerably over the results of the greedy constructive heuristics, but at the cost of a huge amount of computation time. To obtain computational results, we use instances of the problem corresponding to complete graphs with between 20 and 60 nodes and with the value of D varying between 4 and 9.

10.
The allocation of fresh produce to shelf space represents a new decision support research area, motivated by the desire of many retailers to improve their service in the face of increasing demand for fresh food. However, automated decision making for fresh produce allocation is challenging because of the very short lifetime of fresh products. This paper considers a recently proposed practical model for the problem which is motivated by our collaboration with Tesco. Moreover, the paper investigates heuristic and meta-heuristic approaches as alternatives to the generalized reduced gradient algorithm, which becomes inefficient as the problem size grows. A simpler single-item inventory problem is first studied and solved by a polynomial-time procedure. Several dynamic greedy heuristics are then developed for the multi-item problem based on the procedure for the single-item inventory problem. Experimental results show that these greedy heuristics are much more efficient and provide competitive results when compared to those of a multi-start generalized reduced gradient algorithm. In order to further improve the solution, we investigated simulated annealing, a greedy randomized adaptive search procedure and three types of hyper-heuristics. Their performance is tested and compared on a set of problem instances which are made publicly available for the research community.

11.
Determining the maximum outerplanar subgraph of a given graph is known to be an NP-complete problem. In the literature there are no earlier experiments on approximating the maximum outerplanar subgraph problem. In this paper we compare the solution quality and running times of different heuristics for finding maximum outerplanar subgraphs. We compare a greedy heuristic against a triangular cactus heuristic and its greedy variation. We also use the solutions from the greedy heuristics as initial solutions for a simulated annealing algorithm. The main experimental result is that simulated annealing with an initial solution taken from the greedy triangular cactus heuristic yields the best known approximations for the maximum outerplanar subgraph problem. Work funded by the Tampere Graduate School in Information Science and Engineering (TISE) and supported by the Academy of Finland (Project 51528).

12.
The advent of desktop multi-core computers has dramatically improved the usability of parallel algorithms which, in the past, have required specialised hardware. This paper introduces cooperating local search (CLS), a parallelised hyper-heuristic for the maximum clique problem. CLS utilises cooperating low-level heuristics which alternate between sequences of iterative improvement, during which suitable vertices are added to the current clique, and plateau search, where vertices of the current clique are swapped with vertices not in the current clique. These low-level heuristics differ primarily in their vertex selection techniques and their approach to dealing with plateaus. To improve the performance of CLS, guidance information is passed between low-level heuristics directing them to particular areas of the search domain. In addition, CLS dynamically reconfigures the allocation of low-level heuristics to cores, based on information obtained during a trial, to ensure that the mix of low-level heuristics is appropriate for the instance being optimised. CLS has no problem-instance-dependent parameters, improves the state-of-the-art performance for the maximum clique problem over all the BHOSLIB benchmark instances and attains unprecedented consistency over the state-of-the-art on the DIMACS benchmark instances.
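A rough illustration of a single plateau move of the kind described above (a simplified sketch, not the CLS implementation): a vertex outside the clique that is adjacent to all but one clique member is swapped with that one member, leaving the clique size unchanged.

```python
def plateau_swap(adj, clique):
    """clique: set of vertices; adj: dict vertex -> set of neighbours.
    Perform one swap move if possible and report whether a move was made."""
    for v in adj:
        if v in clique:
            continue
        missing = [u for u in clique if u not in adj[v]]
        if len(missing) == 1:          # v conflicts with exactly one clique vertex
            clique.remove(missing[0])
            clique.add(v)
            return True
    return False
```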

13.
The maximum clique problem
In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up-to-date bibliography on the maximum clique and related problems is also provided.

14.
The two-echelon location-routing problem (LRP-2E) arises from recent transportation applications such as city logistics. In this still seldom-studied problem, first-level trips supply a set of satellite depots, which must be located, from a main depot, while second-level trips visit customers from these satellites. After a literature review on the LRP-2E, we present four constructive heuristics and a hybrid metaheuristic: a greedy randomized adaptive search procedure (GRASP) complemented by a learning process (LP) and path relinking (PR). The GRASP and learning process involve three greedy randomized heuristics to generate trial solutions and two variable neighbourhood descent (VND) procedures to improve them. The optional path relinking adds a memory mechanism by combining an intensification strategy and post-optimization. Numerical tests show that the GRASP with LP and PR outperforms the simple heuristics and an adaptation of a matheuristic initially published for a particular case, the capacitated location-routing problem (CLRP). Additional tests on the CLRP indicate that the best GRASP competes with the best metaheuristics published.

15.
The Max-Cut problem is a classical NP-hard problem where the objective is to partition the nodes of an edge-weighted graph in a way that maximizes the total weight of the edges between the parts. We present a greedy heuristic for solving Max-Cut that combines an Edge-Contraction heuristic with the Differencing Method. We compare the heuristic's performance to that of other greedy heuristics using a large and diverse set of problem instances.
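For orientation, a plain one-pass greedy Max-Cut rule (a baseline of the kind such heuristics are compared against, not the paper's edge-contraction/differencing method) can be sketched as follows; the dictionary-based weight representation is an assumption.

```python
def greedy_max_cut(vertices, weight):
    """weight[(u, v)]: non-negative weight of edge {u, v}, each edge stored once.
    Place each vertex on the side that cuts more weight against already-placed vertices."""
    side = {}
    for v in vertices:
        gain_a = sum(w for (a, b), w in weight.items()
                     if v in (a, b) and side.get(b if a == v else a) == 'B')
        gain_b = sum(w for (a, b), w in weight.items()
                     if v in (a, b) and side.get(b if a == v else a) == 'A')
        side[v] = 'A' if gain_a >= gain_b else 'B'
    cut_value = sum(w for (a, b), w in weight.items() if side[a] != side[b])
    return side, cut_value
```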

16.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centers that minimises the total scatter of the points allocated to them. In this paper a new constructive method, a general framework to improve the performance of greedy constructive heuristics, and a problem-space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of density of points. Elements of adaptive computation and periodic construction–deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of the input data, for which a controlled perturbation strategy and intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of benchmarks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.

17.
In the discrete p-hub location problem, various nodes interact with each other by sending and receiving given levels of traffic (such as telecommunications traffic, data transmissions, airline passengers, packages, etc.). It is necessary to choose p of the given nodes to act as hubs, which are fully interconnected; it is also necessary to connect each other node to one of these hubs so that traffic can be sent between any pair of nodes by using the hubs as switching points. The objective is to minimize the sum of the costs for sending traffic along the links connecting the various nodes. Like many combinatorial problems, the p-hub location problem has many local optima. Heuristics, such as exchange methods, can terminate once such a local optimum is encountered. In this paper, we describe new heuristics for the p-hub location problem, based on tabu search and on a greedy randomized adaptive search procedure (GRASP). These recently developed approaches to combinatorial optimization are capable of examining several local optima, so that, overall, superior solutions are found. Computational experience is reported in which both tabu search and GRASP found optimal hub locations (subject to the assumption that nodes must be assigned to the nearest hub) in over 90% of test problems. For problems for which such optima are not known, tabu search and GRASP generated new best-known solutions.

18.
Greedy Randomized Adaptive Search Procedures
Today, a variety of heuristic approaches are available to the operations research practitioner. One methodology that has a strong intuitive appeal, a prominent empirical track record, and is trivial to efficiently implement on parallel processors is GRASP (Greedy Randomized Adaptive Search Procedures). GRASP is an iterative randomized sampling technique in which each iteration provides a solution to the problem at hand. The incumbent solution over all GRASP iterations is kept as the final result. There are two phases within each GRASP iteration: the first intelligently constructs an initial solution via an adaptive randomized greedy function; the second applies a local search procedure to the constructed solution in hope of finding an improvement. In this paper, we define the various components comprising a GRASP and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology are discussed. The paper concludes with a brief literature review of GRASP implementations and mentions two industrial applications.
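The two-phase structure described above can be summarized in a short sketch (illustrative Python; the construction, local search, and cost routines are problem-specific placeholders, and the restricted-candidate-list rule shown is one common way such an adaptive randomized greedy function is realized).

```python
import random

def grasp(construct, local_search, cost, iterations=100):
    """Generic GRASP loop: repeat construction + local search, keep the incumbent."""
    best = None
    for _ in range(iterations):
        solution = local_search(construct())          # phase 1 then phase 2
        if best is None or cost(solution) < cost(best):
            best = solution
    return best

def randomized_greedy(candidates, incremental_cost, alpha=0.3):
    """Build a solution element by element, picking at random from a restricted
    candidate list (RCL) containing the roughly alpha-best remaining candidates."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: incremental_cost(solution, c) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        rcl = [c for c in remaining if costs[c] <= c_min + alpha * (c_max - c_min)]
        choice = random.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution
```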

19.
A study of ACO capabilities for solving the maximum clique problem
This paper investigates the capabilities of the Ant Colony Optimization (ACO) meta-heuristic for solving the maximum clique problem, the goal of which is to find a largest set of pairwise adjacent vertices in a graph. We propose and compare two different instantiations of a generic ACO algorithm for this problem. Basically, the generic ACO algorithm successively generates maximal cliques through the repeated addition of vertices into partial cliques, and uses “pheromone trails” as a greedy heuristic to choose, at each step, the next vertex to enter the clique. The two instantiations differ in the way pheromone trails are laid and exploited, i.e., on edges or on vertices of the graph. We illustrate the behavior of the two ACO instantiations on a representative benchmark instance and we study the impact of pheromone on the solution process. We consider two measures, the re-sampling ratio and the dispersion ratio, to provide insight into the performance at run time. We also study the benefit of integrating a local search procedure within the proposed ACO algorithm, and we show that this improves the solution process. Finally, we compare ACO performance with that of three other representative heuristic approaches, showing that the former obtains competitive results.
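The vertex-pheromone variant of the construction step can be sketched as follows (a simplified illustration, not the authors' algorithm; pheromone evaporation, trail updates, and the usual ACO weighting parameters are omitted).

```python
import random

def construct_clique(adj, pheromone):
    """One ant builds a maximal clique. adj: dict vertex -> set of neighbours;
    pheromone: dict vertex -> positive float guiding the vertex choice."""
    clique = [random.choice(list(adj))]
    candidates = set(adj[clique[0]])
    while candidates:
        cand_list = list(candidates)
        v = random.choices(cand_list, weights=[pheromone[u] for u in cand_list])[0]
        clique.append(v)
        candidates &= adj[v]          # keep only vertices adjacent to the whole clique
    return clique
```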

20.
A greedy clique decomposition of a graph is obtained by removing maximal cliques from a graph one by one until the graph is empty. It has recently been shown that any greedy clique decomposition of a graph of order n has at most n²/4 cliques. In this paper, we extend this result by showing that for any positive integer p, 3 ≤ p, any clique decomposition of a graph of order n obtained by removing maximal cliques of order at least p one by one until none remain, at which point the remaining edges are removed one by one, has at most t_{p-1}(n) cliques. Here t_{p-1}(n) is the number of edges in the Turán graph of order n that has no complete subgraphs of order p. In connection with greedy clique decompositions, P. Winkler conjectured that for any greedy clique decomposition C of a graph G of order n, the sum over the cliques of C of the number of vertices in each clique is at most n²/2. We prove this conjecture for K_4-free graphs and show that in the case of equality for C and G there are only two possibilities:
  1. G ≅ K_{n/2,n/2}, or
  2. G is complete 3-partite, where each part has n/3 vertices.
We show that in either case C is completely determined.
