Similar Literature
1.
We consider the multilevel paradigm and its potential to aid the solution of combinatorial optimisation problems. The multilevel paradigm is a simple one, which involves recursive coarsening to create a hierarchy of approximations to the original problem. An initial solution is found (sometimes for the original problem, sometimes for the coarsest approximation) and then iteratively refined at each level. As a general solution strategy, the multilevel paradigm has been in use for many years and has been applied to many problem areas (most notably in the form of multigrid techniques). However, with the exception of the graph partitioning problem, multilevel techniques have not been widely applied to combinatorial optimisation problems. In this paper we address the issue of multilevel refinement for such problems and, with the aid of examples and results in graph partitioning, graph colouring and the travelling salesman problem, make a case for its use as a metaheuristic. The results provide compelling evidence that, although the multilevel framework cannot be considered a panacea for combinatorial problems, it can provide an extremely useful addition to the combinatorial optimisation toolkit. We also give a possible explanation for the underlying process and extract some generic guidelines for its future use on other combinatorial problems.
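A minimal sketch of the generic multilevel loop this abstract describes. The four hooks (`coarsen`, `initial_solution`, `extend`, `refine`) and the `size` attribute are illustrative names of problem-specific callbacks, not identifiers from the paper:

```python
# Sketch of the recursive multilevel strategy: coarsen until the problem is
# small, solve the coarsest level, then extend and refine level by level.
# All four hooks are hypothetical problem-specific callbacks.

def multilevel_solve(problem, coarsen, initial_solution, extend, refine,
                     min_size=10):
    """Recursively coarsen, solve the coarsest level, then refine upwards."""
    if problem.size <= min_size:
        return refine(problem, initial_solution(problem))
    coarse, mapping = coarsen(problem)            # merge vertices/cities/etc.
    coarse_sol = multilevel_solve(coarse, coarsen, initial_solution,
                                  extend, refine, min_size)
    solution = extend(problem, coarse_sol, mapping)   # project to finer level
    return refine(problem, solution)              # e.g. local search refinement
```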

2.
Constraint Programming (CP) has been successful in a number of combinatorial search and discrete optimisation problems. Yet other, more traditional approaches, such as Integer Programming (IP), can still give better performance on the same problem types. Central to IP's success is its reliance on a fast Linear Programming (LP) solver providing solutions to the corresponding relaxed problems during the search. These solutions are used to guide the search within IP, as well as a means of detecting infeasibility and integrality. This paper shows that there is scope to include LP within the CP framework as well, in order to similarly guide the CP search. The problems examined here are ones for which CP on its own had proved markedly inferior to IP. Hence a hybrid solver based on the CP search and using an LP solver is configured and run on these problems. The outcome shows that using the LP solver within the CP search is a valuable addition to the available search strategies: performance improves over the CP-only strategies and, further, the results obtained are comparable to those from IP. Overall, CP+LP can be considered a more robust approach than either CP or IP on their own across a variety of combinatorial search problems.

3.
DNA codes are sets of words of fixed length n over the alphabet {A,C,G,T} which satisfy a number of combinatorial conditions. They have applications in DNA computing, in DNA microarray technologies and as molecular bar codes. The combinatorial conditions considered are (i) minimum Hamming distance d, (ii) fixed GC content and, in some cases, (iii) minimum distance d between any codeword and the reverse Watson-Crick complement of any codeword. The problem is to find DNA codes with the maximum number of codewords. In this paper the construction of DNA codes is studied from an algorithmic perspective. Four local search algorithms are developed and combined in a variable neighbourhood search framework. The algorithm was run extensively: over the 254 problems considered, it matched or improved the best known lower bounds in 180 cases, including 52 new best bounds.
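The three combinatorial conditions are easy to state in code. A small sketch for verifying a candidate code against them; function and parameter names are our own, not the authors':

```python
# Verify the three conditions: (i) pairwise Hamming distance >= d,
# (ii) every word has exactly gc_weight G/C symbols, and (iii) Hamming
# distance >= d between every codeword and every reverse Watson-Crick
# complement of a codeword.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def reverse_complement(word):
    return "".join(COMPLEMENT[c] for c in reversed(word))

def is_valid_dna_code(code, d, gc_weight, check_rc=True):
    words = list(code)
    if any(w.count("G") + w.count("C") != gc_weight for w in words):
        return False                                    # condition (ii)
    for i, u in enumerate(words):
        for j, v in enumerate(words):
            if i < j and hamming(u, v) < d:
                return False                            # condition (i)
            if check_rc and hamming(u, reverse_complement(v)) < d:
                return False                            # condition (iii)
    return True
```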

4.
In this paper we tackle an important point of combinatorial optimisation: complexity theory when dealing with the counting or enumeration of optimal solutions. Complexity theory was initially designed for decision problems and has evolved over the years, for instance, to tackle particular features of optimisation problems. More recently, it has also evolved towards the complexity of counting and enumeration problems, and several complexity classes, which we review in this paper, have emerged in the literature. Such problems arise naturally in multicriteria optimisation, where the aim is often to enumerate the set of so-called Pareto optima. In the second part of this paper we review the complexity of multicriteria scheduling problems in the light of these complexity results. This paper appeared in 4OR 3(1), 1–21, 2005.

5.
Since their beginnings in constraint programming, set solvers have been applied to a wide range of combinatorial search problems, such as bin-packing, set partitioning, and circuit and combinatorial design. In this paper we present and evaluate a new means of improving the practical reasoning power of Finite Set (FS) constraint solvers to better address such set problems, with particular attention to the challenging symmetrical set problems often cast as Combinatorial Design Problems (CDPs). While CDPs find a natural formulation in the language of sets, in the constraint programming literature alternative models are often used owing to a lack of efficiency of traditional FS solvers. We first identify the main structural components of CDPs that make their modelling well suited to set languages but their solving a technical challenge. Our new prototype solver extends the traditional subset variable domain with lexicographic bounds that better approximate a set domain by satisfying the cardinality restrictions applied to the variable, and that allow for active symmetry breaking using ordering constraints. Our contribution includes the formal and practical framework of the new solver, implemented on top of a traditional set solver, and an empirical evaluation on benchmark CDPs.
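A toy illustration of the lexicographic-bounds idea, assuming sets over {0, ..., n-1} ordered by their characteristic vectors; the prototype solver's actual data structures are not described at this level of detail, so everything below is an assumption for illustration:

```python
# A set domain approximated by a lex smallest and lex largest candidate set
# plus cardinality bounds: a set belongs to the domain if its cardinality is
# in range and its characteristic vector lies lexicographically between the
# two bound sets. Illustrative only.

def char_vector(s, n):
    return tuple(1 if i in s else 0 for i in range(n))

def in_lex_domain(s, n, lex_lb, lex_ub, card_lb, card_ub):
    if not card_lb <= len(s) <= card_ub:
        return False
    return (char_vector(lex_lb, n) <= char_vector(s, n)
            <= char_vector(lex_ub, n))
```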

6.
Many problems in image restoration can be formulated either as an unconstrained non-linear minimization problem, usually with a Tikhonov-like regularization, where the regularization parameter has to be determined; or as a fully constrained problem, where an estimate of the noise level, either the variance or the signal-to-noise ratio, is available. The two formulations are mathematically equivalent. However, in practice, it is much easier to develop algorithms for the unconstrained problem, and it is not always obvious how to adapt such methods to solve the corresponding constrained problem. In this paper, we present a new method which can make use of any existing convergent method for the unconstrained problem to solve the constrained one. The new method is based on a Newton iteration applied to an extended system of non-linear equations, which couples the constraint and the regularized problem, but it does not require knowledge of the Jacobian of the irregularity functional. The existing solver is used only as a black box, which for a fixed regularization parameter returns an improved solution to the unconstrained minimization problem given an initial guess. The new modular solver enables us to easily solve the constrained image restoration problem; the solver automatically identifies the regularization parameter during the iterative solution process. We present some numerical results, which indicate that even in the worst case the constrained solver requires only about twice as much work as the unconstrained one, and in some instances the constrained solver can even be faster. Copyright © 2002 John Wiley & Sons, Ltd.
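A heavily simplified sketch of the modular idea: the existing unconstrained solver is used as a black box x(λ), and the regularization parameter is adjusted until the residual matches the noise estimate. The paper applies a Newton iteration to a coupled extended system; here a plain secant iteration on the scalar discrepancy stands in as an illustration only, and all names are hypothetical:

```python
# unconstrained_solve(lmbda) returns a restored image for a fixed
# regularization parameter (the black-box inner solver); residual_norm
# measures the data misfit. We adjust lmbda by a secant iteration until
# the residual matches the given noise level.

def constrained_restore(unconstrained_solve, residual_norm, noise_level,
                        lmbda0=1.0, lmbda1=10.0, tol=1e-6, max_iter=50):
    def g(lmbda):
        x = unconstrained_solve(lmbda)       # one inner (black-box) solve
        return residual_norm(x) - noise_level, x

    g0, _ = g(lmbda0)
    g1, x = g(lmbda1)
    for _ in range(max_iter):
        if abs(g1) <= tol or g1 == g0:       # converged or secant stalled
            break
        lmbda0, lmbda1 = lmbda1, lmbda1 - g1 * (lmbda1 - lmbda0) / (g1 - g0)
        g0 = g1
        g1, x = g(lmbda1)
    return x, lmbda1
```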

7.
This paper introduces a new approach to applying hyper-heuristic algorithms to solve combinatorial problems with less effort, taking into account the modelling and algorithm construction process. We propose a unified encoding of a solution and a set of low-level heuristics which are domain-independent and which change the solution itself. This approach enables us to address NP-hard problems and generate good approximate solutions in a reasonable time, without the large amount of additional work usually required to tailor search methodologies to the problem at hand. In particular, we focus on solving DNA sequencing by hybridization with errors, which is known to be strongly NP-hard. The approach was extensively tested by solving multiple instances of well-known combinatorial problems and compared with results generated by metaheuristics that have been tailored to specific problem domains.

8.
The p-median problem has been widely studied in combinatorial optimisation, but its generalisation to the capacitated case has not. We propose a branch and price algorithm, comparing it with a standard MIP solver and a branch and bound algorithm based on Lagrangean relaxation. We present computational experience, using test instances drawn from the literature and new instances with a higher ratio between the number of medians p and the number of nodes N. The branch and price algorithm shows very good performance and robust computation times when solving problems at any ratio. Received: December 2002 / Revised: August 2003. AMS classification: 90C10, 90C27

9.
When local search methods are applied to combinatorial optimisation problems, it is the characteristics of the solution surface that determine the effectiveness of the method. This paper aims to advance our understanding of solution surface characteristics. The focus is on the basin of attraction associated with each local minimum; that is, the set of solutions from which a particular local minimum is reached by following downhill local search. A Markov chain model is proposed for the behaviour of the function values occurring in a random walk on the solution surface. The probability transition matrix can be estimated, and this estimate is used to infer both the shape and the size of the basins of attraction. To test this approach, a study is made of the problem of minimising weighted flowtime on unrelated parallel machines.
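A sketch of the estimation step, assuming the objective values observed along a random walk are discretised into bins and the transition matrix is estimated by frequency counts; the bin count and setup are illustrative, not the paper's exact experimental design:

```python
# Discretise walk values into bins and count transitions between bins to
# estimate P[i, j] = Pr(next value in bin j | current value in bin i).
import numpy as np

def estimate_transition_matrix(walk_values, n_bins=20):
    values = np.asarray(walk_values, dtype=float)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    bins = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    counts = np.zeros((n_bins, n_bins))
    for i, j in zip(bins[:-1], bins[1:]):
        counts[i, j] += 1                    # observed transition i -> j
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```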

10.
Component deployment is a combinatorial optimisation problem in software engineering that aims to find the best allocation of software components to hardware resources in order to optimise quality attributes, such as reliability. The problem is often constrained by the limited hardware resources and by the communication network, which may connect only certain resources. Owing to the non-linear nature of the reliability function, current optimisation methods have focused mainly on heuristic or metaheuristic algorithms. These are approximate methods which find near-optimal solutions in a reasonable amount of time. In this paper, we present a mixed integer linear programming (MILP) formulation of the component deployment problem. We design a set of experiments in which we compare the MILP solver to methods previously used to solve this problem. Results show that the MILP solver is efficient in finding feasible solutions even where other methods fail, and can prove infeasibility where no feasible solution exists.

11.
In this paper, a greedy randomised heuristic is applied to a complex vehicle-scheduling problem with tight time windows and additional constraints. Two forms of adaptive search are identified, referred to as local and global adaptation. In both cases, the calculation of the greedy function is modified by an amount which heuristically measures the quality of the partial solution obtained when a decision is made. One use of global adaptation is an approach referred to as a learning strategy, since it involves an attempt to learn from previous mistakes by appropriately updating the greedy function from one run of the heuristic to the next. Such a learning strategy forms the main focus of this paper. Experimental results show that it is potentially a powerful heuristic device, since it greatly enhanced the effectiveness of the methods previously applied to this problem, namely a greedy randomised heuristic which also incorporated a look-ahead strategy, and a version of the well-known savings method. It is suggested that learning strategies of the general type introduced in this paper have potential for application to other combinatorial optimisation problems.
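A sketch of the learning strategy (global adaptation) as we read it: across successive runs of the greedy randomised heuristic, decisions appearing in non-improving solutions accumulate a penalty that is subtracted from their greedy score in later runs. All names and the update rule below are illustrative; the paper's greedy function and adjustment are problem specific:

```python
# Repeat a greedy randomised construction; decisions in non-improving
# solutions accumulate a penalty subtracted from their greedy score.
from collections import defaultdict

def learning_search(greedy_score, build_run, quality, n_runs=20, step=1.0):
    penalty = defaultdict(float)             # learned adjustment per decision
    best, best_q = None, float("inf")
    for _ in range(n_runs):
        scorer = lambda c: greedy_score(c) - penalty[c]
        solution = build_run(scorer)         # one greedy randomised run
        q = quality(solution)
        if q < best_q:
            best, best_q = solution, q       # keep the new record
        else:
            for decision in solution:        # "learn from previous mistakes"
                penalty[decision] += step
    return best, best_q
```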

12.
In the last few years, a significant number of multi-objective metaheuristics have been proposed in the literature to address real-world problems. Local search methods play a major role in many of these metaheuristic procedures. In this paper, we adapt the recent and popular indicator-based selection method proposed by Zitzler and Künzli in 2004 in order to define a population-based multi-objective local search. The proposed algorithm is designed to be easily adaptable and parameter independent, and to have a high convergence rate. In order to evaluate the capacity of our algorithm to reach these goals, a large part of the paper is dedicated to experiments on three combinatorial optimisation problems: a flow shop problem, a ring star problem and a nurse scheduling problem. The experiments show that our algorithm can be applied with success to different types of multi-objective optimisation problems and that it outperforms some classical metaheuristics. Furthermore, the parameter sensitivity analysis enables us to provide some useful guidelines on how to set the parameters.
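For reference, a sketch of the indicator-based fitness assignment of Zitzler and Künzli (2004) that the proposed local search builds on, assuming minimisation of all objectives and the additive epsilon-indicator; the scaling constant kappa is an illustrative choice, not a value from the paper:

```python
# Indicator-based fitness: each point is scored by how much the rest of the
# population "loses" (in additive-epsilon terms) if the point is kept.
import math

def eps_indicator(a, b):
    """Smallest eps by which point a must be shifted to weakly dominate b."""
    return max(fa - fb for fa, fb in zip(a, b))

def ibea_fitness(objectives, kappa=0.05):
    """Lower fitness = worse; the worst point is discarded first."""
    fitness = []
    for i, x in enumerate(objectives):
        f = sum(-math.exp(-eps_indicator(y, x) / kappa)
                for j, y in enumerate(objectives) if j != i)
        fitness.append(f)
    return fitness
```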

13.
In the search for better optimisation techniques, new methods that mix artificial intelligence and operations research have emerged, in which search heuristics are integrated with optimisation algorithms. Approximation methods such as Hill Climbing, Simulated Annealing and Tabu Search, which have been used with success on combinatorial optimisation problems, are one such research line. This paper presents the key elements of approximation methods and combines them in a tool appropriate for solving sequencing and resource allocation problems. The system permits a clear division between problem specification and problem solving, allowing a declarative representation and therefore minimising development costs. The key issues discussed in this work are a model for representing this class of problems in a standard form, a set of strategies for applying the approximation methodology, and an expert system that dynamically manipulates the strategies' parameters.

14.
The bin packing problem (and its variant, the cutting stock problem) is among the most intensively studied combinatorial optimization problems. We present a library of computer codes, benchmark instances, and pointers to relevant articles for these two problems. The library is available at http://or.dei.unibo.it/library/bpplib. The computer code section includes twelve programs: seven are directly downloadable from the library page, while for the remaining five we provide addresses where they can be obtained or downloaded. Some of the codes for which we provide an original C++ implementation need an integer linear programming solver; for such cases, the library provides two versions, one that uses the commercial solver CPLEX and one that uses the freeware solver SCIP. The benchmark section provides over six thousand instances (partly coming from the literature and partly randomly generated), together with the corresponding solutions. Instances that are difficult to solve to proven optimality are included. The library also includes a BibTeX file of more than 150 references on this topic and an interactive visual tool to manually solve bin packing and cutting stock instances. We conclude this work by reporting the results of new computational experiments on a number of computer codes and benchmark instances.

15.
While research in robust optimization has attracted considerable interest over the last decades, its algorithmic development has been hindered by several factors. One of them is the lack of a set of benchmark instances that would make algorithm performance more comparable and make reproducing instances unnecessary. Such a benchmark set should in particular contain hard instances, but so far the standard approach to producing instances has been to sample values randomly from a uniform distribution. In this paper we introduce a new method to produce hard instances for min-max combinatorial optimization problems, which is itself based on an optimization model. Our approach does not make any assumptions on the problem structure and can thus be applied to any combinatorial problem. Using the Selection and Traveling Salesman problems as examples, we show that it is possible to produce instances which are up to 500 times harder for a mixed-integer programming solver to solve than the current state-of-the-art instances.

16.
The travelling salesman problem (TSP) is one of the most prominent NP-hard combinatorial optimisation problems. After over fifty years of intense study, the TSP continues to be of broad theoretical and practical interest. Using a novel approach to empirical scaling analysis, which in principle is applicable to solvers for many other problems, we demonstrate that some of the most widely studied types of TSP instances tend to be much easier than expected from previous theoretical and empirical results. In particular, we show that the empirical median run-time required for finding optimal solutions to so-called random uniform Euclidean (RUE) instances – one of the most widely studied classes of TSP instances – scales substantially better than Θ(2^n) with the number n of cities to be visited. The Concorde solver, for which we achieved this result, is the best-performing exact TSP solver we are aware of, and has been applied to a broad range of real-world problems. Furthermore, we show that even when applied to a broad range of instances from the prominent TSPLIB benchmark collection, Concorde exhibits run-times that are surprisingly consistent with our empirical model of its scaling behaviour on RUE instances. This result suggests that the behaviour observed for the simple random structure underlying RUE instances is very similar to that obtained on the structured instances arising in various applications.
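A simplified sketch of this kind of empirical scaling analysis: fit a root-exponential model t(n) = a·b^√n and a polynomial model t(n) = a·n^b to median run-times in log space and compare residuals. The authors' full bootstrap-based methodology is more involved; the model forms and the least-squares fit below are illustrative assumptions:

```python
# Fit two candidate scaling models to median run-times by least squares in
# log space and report coefficients plus the residual sum of squares.
import numpy as np

def fit_scaling_models(n_values, median_runtimes):
    n = np.asarray(n_values, dtype=float)
    log_t = np.log(np.asarray(median_runtimes, dtype=float))

    def lstsq_fit(x):                        # fit log t = c0 + c1 * x
        A = np.column_stack([np.ones_like(x), x])
        c, *_ = np.linalg.lstsq(A, log_t, rcond=None)
        rss = float(np.sum((A @ c - log_t) ** 2))
        return c, rss

    c_re, rss_re = lstsq_fit(np.sqrt(n))     # log t = log a + sqrt(n) log b
    c_po, rss_po = lstsq_fit(np.log(n))      # log t = log a + b log n
    return {"root_exponential": (np.exp(c_re[0]), np.exp(c_re[1]), rss_re),
            "polynomial": (np.exp(c_po[0]), c_po[1], rss_po)}
```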

17.
The multidimensional knapsack problem (MKP) is a classic problem in combinatorial optimisation. Several authors have proposed to use surrogate relaxation to compute upper bounds for the MKP. In their papers, the surrogate dual is solved heuristically. We show that, using a modern dual simplex solver as a subroutine, one can solve the surrogate dual exactly in reasonable computing times. On the other hand, the resulting upper bound tends to be strong only for relatively small MKP instances.
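A sketch of the surrogate relaxation bound for a fixed multiplier vector u ≥ 0: the m knapsack constraints are aggregated into a single surrogate constraint, and the continuous single-knapsack relaxation is solved greedily. Optimising over u (the surrogate dual, which the paper solves exactly) is not shown; all names are illustrative:

```python
def surrogate_bound(profits, weights, capacities, u):
    """Upper bound for a max-MKP; weights[i][j], capacities[i], u[i] >= 0."""
    m, n = len(weights), len(profits)
    # Aggregate the m constraints into one surrogate constraint.
    agg_w = [sum(u[i] * weights[i][j] for i in range(m)) for j in range(n)]
    agg_c = sum(u[i] * capacities[i] for i in range(m))
    # Continuous knapsack: take items by profit/weight ratio, split the last.
    ratio = lambda j: profits[j] / agg_w[j] if agg_w[j] > 0 else float("inf")
    bound, remaining = 0.0, agg_c
    for j in sorted(range(n), key=ratio, reverse=True):
        if agg_w[j] <= remaining:
            bound += profits[j]
            remaining -= agg_w[j]
        else:
            bound += profits[j] * remaining / agg_w[j]   # fractional item
            break
    return bound
```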

18.
Risk engineering models for decision making under uncertainty are illustrated by a succession of increasingly complex example applications. The approach is based upon simple arithmetic procedures which are easy to understand and use. Offshore North Sea projects are the basis for the example applications, but the approach has been used in a range of problem environments in the U.K., Canada and the U.S.A. The emphasis is on a flexible structure for viewing problems, which allows modelling at a level of detail, and with a degree of quantification, tailored to the needs of the context. The concern is gaining useful insight, not optimising model solutions.

19.
Adaptive large neighborhood search (ALNS) is a useful framework for solving difficult combinatorial optimisation problems. As a metaheuristic, it consists of some components that must be tailored to the specific optimisation problem being solved, while other components are problem independent. The literature is sparse on studies that evaluate the relative merit of different alternatives for specific problem-independent components. This paper investigates one such component, the move acceptance criterion in ALNS, and compares a range of alternatives. Through extensive computational testing, the alternative move acceptance criteria are ranked in three groups, depending on the performance of the resulting ALNS implementations. Among the best variants, we find versions of criteria based on simulated annealing, threshold acceptance, and record-to-record travel, with a version of the latter being consistently undominated by the others. Additional analyses focus on search behaviour, and multiple linear regression is used to identify characteristics of search behaviour that are associated with good search performance.
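For concreteness, a sketch of the three acceptance criteria named among the best variants, in the form typically used for minimisation; parameter values and cooling or decay schedules are left out and would be illustrative assumptions, not the paper's tuned settings:

```python
# Three classical move acceptance criteria, each deciding whether a new
# candidate solution replaces the current one (minimisation assumed).
import math
import random

def accept_simulated_annealing(new_cost, current_cost, temperature):
    if new_cost <= current_cost:
        return True
    # Accept a worsening move with probability exp(-delta / T).
    return random.random() < math.exp((current_cost - new_cost) / temperature)

def accept_threshold(new_cost, current_cost, threshold):
    # Threshold acceptance: any move at most `threshold` worse is accepted.
    return new_cost <= current_cost + threshold

def accept_record_to_record(new_cost, best_cost, deviation):
    # Record-to-record travel: compare to the best (record) cost instead.
    return new_cost <= best_cost + deviation
```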
