Similar Documents
A total of 20 similar documents were found (search time: 904 ms).
1.
The set cover problem is that of computing, given a family F of weighted subsets of a base set U, a minimum weight subfamily F′ such that every element of U is covered by some subset in F′. The k-set cover problem is the variant in which every subset has size at most k. It has long been known that the problem can be approximated within a factor of H(k), the k-th harmonic number, by the greedy heuristic, but no better bound has been shown except for the case of unweighted subsets. As a first step towards better approximation of the general k-set cover problem, this paper considers a restricted version of the weighted 3-set cover problem in which any two distinct subset costs differ by a multiplicative factor of at least 2. It is shown, via LP duality, that an improved approximation bound of H(3) − 1/6 can be attained when the greedy heuristic is suitably modified for this case. A key to the algorithm design and analysis is the Gallai–Edmonds structure theorem for maximum matchings.
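A minimal Python sketch of the greedy heuristic referred to above (function name and toy instance are illustrative assumptions, not taken from the paper): at each step it selects the subset minimizing cost per newly covered element, which is the rule behind the H(k) bound when every subset has size at most k.

    def greedy_set_cover(universe, subsets, costs):
        """Pick, at each step, the subset with the smallest cost per newly
        covered element, until every element of the universe is covered."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            candidates = [i for i in range(len(subsets)) if subsets[i] & uncovered]
            best = min(candidates, key=lambda i: costs[i] / len(subsets[i] & uncovered))
            chosen.append(best)
            uncovered -= subsets[best]
        return chosen

    # toy instance (illustrative)
    U = {1, 2, 3, 4, 5}
    S = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
    c = [5.0, 1.0, 4.0, 1.0]
    print(greedy_set_cover(U, S, c))  # prints [1, 3, 0] for this instance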

2.
In this paper we analyze several approaches to the Maximum Independent Set (MIS) problem in hypergraphs with degree bounded by a parameter Δ. Since independent sets in hypergraphs can be strong or weak, we denote by MIS (MSIS) the problem of finding a maximum weak (strong) independent set in hypergraphs, respectively. We propose a general technique that reduces the worst-case analysis of certain algorithms on hypergraphs to their analysis on ordinary graphs. This technique allows us to show that the greedy algorithm for MIS that corresponds to the classical greedy set cover algorithm has a performance ratio of (Δ+1)/2. It also allows us to apply results on local search algorithms on graphs to obtain a (Δ+1)/2 approximation for the weighted MIS and a (Δ+3)/5 − ε approximation for the unweighted case. We improve the bound in the weighted case to ⌈(Δ+1)/3⌉ using a simple partitioning algorithm. We also consider another natural greedy algorithm for MIS that adds vertices of minimum degree; it achieves only a ratio of Δ − 1, significantly worse than on ordinary graphs. For MSIS, we give two variations of the basic greedy algorithm and describe a family of hypergraphs on which both algorithms approach the bound of Δ.
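For comparison with the hypergraph setting above, the minimum-degree greedy for independent sets, specialized to ordinary graphs, can be sketched as follows; this is an illustrative sketch under that simplification (a hypergraph version would instead delete a chosen vertex's incident hyperedges), not the authors' exact procedure.

    def min_degree_greedy_independent_set(adj):
        """Repeatedly pick a vertex of minimum current degree, add it to the
        independent set, and delete it together with its neighbours."""
        adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
        independent = []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))      # minimum-degree vertex
            independent.append(v)
            removed = adj[v] | {v}
            for u in removed:
                adj.pop(u, None)
            for u in adj:
                adj[u] -= removed
        return independent

    # toy graph: the path 1-2-3-4
    print(min_degree_greedy_independent_set({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # prints [1, 3]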

3.
In the test cover problem a set of m items is given together with a collection of subsets, called tests. A smallest subcollection of tests is to be selected such that for each pair of items there is a test in the selection that contains exactly one of the two items. It is known that the problem is NP-hard and that the greedy algorithm has a performance ratio of O(log m). We observe that, unless P = NP, no polynomial-time algorithm can do essentially better. For the case that each test contains at most k items, we give an O(log k)-approximation algorithm. We pay special attention to the case that each test contains at most two items. A strong relation with a problem of packing paths in a graph is established, which implies that even this special case is NP-hard. We prove APX-hardness of both problems, derive performance guarantees for greedy algorithms, and discuss the performance of a series of local improvement heuristics.
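To make the greedy rule concrete, here is a minimal Python sketch (names and toy data are illustrative assumptions): it treats test cover as set cover over item pairs, repeatedly picking the test that separates the most not-yet-separated pairs, which is why an O(log m)-type guarantee carries over.

    from itertools import combinations

    def greedy_test_cover(items, tests):
        """Greedy for test cover: repeatedly pick the test separating the most
        item pairs that are not yet separated (a pair is separated by a test
        containing exactly one of its two items)."""
        unseparated = {frozenset(p) for p in combinations(items, 2)}
        chosen = []
        while unseparated:
            def gain(t):
                return sum(1 for p in unseparated if len(p & t) == 1)
            best = max(range(len(tests)), key=lambda i: gain(tests[i]))
            if gain(tests[best]) == 0:
                break  # remaining pairs cannot be separated by any test
            chosen.append(best)
            unseparated = {p for p in unseparated if len(p & tests[best]) != 1}
        return chosen

    # toy instance: 4 items, tests given as sets of items
    print(greedy_test_cover([1, 2, 3, 4], [frozenset({1, 2}), frozenset({1, 3}), frozenset({1})]))  # prints [0, 1]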

4.
The notion of recoverable value was advocated in the work of Feige, Immorlica, Mirrokni and Nazerzadeh (APPROX 2009) as a measure of quality for approximation algorithms. There, this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has recoverable factor ρ if for every graph G it recovers an independent set of size at least ∑_{v∈I} min{1, ρ/(d(v)+1)}, where d(v) is the degree of vertex v and I ranges over all independent sets in G. Hence, in a sense, from every vertex v in the maximum independent set the algorithm recovers a value of at least min{1, ρ/(d(v)+1)} toward the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low-degree vertices. A simple greedy algorithm achieves ρ = 1. We design a new randomized algorithm for MIS that ensures a strictly larger expected recoverable factor. In passing, we prove that approximating MIS in graphs with a given k-coloring within a ratio larger than 2/k is unique-games hard. This rules out an alternative, coloring-based approach to obtaining a larger recoverable factor. © 2013 Wiley Periodicals, Inc. Random Structures & Algorithms 46, 142–159, 2015.

5.
The maximum weight k-independent set problem has applications in many practical problems, such as the k-machines job scheduling problem, the k-colourable subgraph problem, and VLSI design layout and routing. Based on a DAG (directed acyclic graph) approach, an O(kn²) time sequential algorithm is designed in this paper to solve the maximum weight k-independent set problem on weighted trapezoid graphs. The weights considered here are all non-negative, one associated with each of the n vertices of the graph.

6.
We consider the polynomial approximation behavior of the problem of finding, in a graph with weighted vertices, a maximal independent set minimizing the sum of the weights. In the spirit of a work of Halldórsson dealing with the unweighted case, we extend it and establish approximation hardness results by means of a reduction from the minimum coloring problem. In particular, a consequence of our main result is that, unless P = NP, no polynomial-time algorithm approximates this problem within a ratio independent of the weights. We also bring to the fore a very simple ratio guaranteed by every algorithm, while no polynomial-time algorithm can guarantee (1 − ε) times this ratio for any ε > 0. The known hardness results for the unweighted case can be deduced. We finally discuss approximation results for both the weighted and unweighted cases: we establish an approximation ratio valid for any algorithm for the former and propose an analysis of a greedy algorithm for the latter.

7.
This paper considers the following problem: given an edge-weighted graph G = (V, E, w) and disjoint k-subsets U_p of V, find a minimum weight set of edges E′ ⊆ E whose removal disconnects the graph into k parts such that each part contains exactly one vertex from each U_p, for 1 ≤ p ≤ r. This generalizes some well-known NP-hard problems. In this paper, we first apply a greedy local search algorithm to obtain better approximate solutions. Then we give a randomized local search algorithm which produces a solution within a factor of (1 + ε) with probability at least (1 − 1/e), for any small ε. Simple near-optimum approximation algorithms are also proposed. Analogously, there is a maximum k-multiway cut problem with the same restrictions.

8.
The min-Shift Design problem (MSD) is an important scheduling problem that needs to be solved in many industrial contexts. The task is to find a minimum number of shifts, together with the number of employees to be assigned to each shift, so as to minimize the deviation from the workforce requirements. Our research considers both theoretical and practical aspects of the min-Shift Design problem. This problem is closely related to the minimum edge-cost flow problem (MECF), a network flow variant that has many applications beyond shift scheduling. We show that MSD reduces to a special case of MECF and, exploiting this reduction, we prove a logarithmic hardness-of-approximation lower bound for MSD. On the basis of these results, we propose a hybrid heuristic for the problem, which relies on a greedy heuristic followed by a local search algorithm. The greedy part is based on the network flow analogy, and the local search algorithm makes use of multiple neighborhood relations. An experimental analysis on structured random instances shows that the hybrid heuristic clearly outperforms our previous commercial implementation. Furthermore, it highlights the respective merits of the composing heuristics for different performance parameters.

9.
10.
Improved algorithms for the multicut and multiflow problems in rooted trees
A. Tamir, TOP, 2008, 16(1): 114–125
Costa et al. (Oper. Res. Lett. 31:21–27, 2003) presented a quadratic O(min(Kn, n²)) greedy algorithm to solve the integer multicut and multiflow problems in a rooted tree (n is the number of nodes of the tree, and K is the number of commodities). Their algorithm is a special case of the greedy-type algorithm of Kolen (Location problems on trees and in the rectilinear plane, Ph.D. dissertation, 1982) for weighted covering and packing problems defined by general totally balanced (greedy) matrices. In this communication we improve the complexity bound of Costa et al. and show that, for the integer multicut and multiflow problems in a rooted tree, the greedy algorithm of Kolen can be implemented in subquadratic O(K + n + min(K, n) log n) time. The improvement is obtained by identifying additional properties of this model which lead to a subquadratic transformation to greedy form, and by using more sophisticated data structures.

11.
A class of ordered sets is investigated for which the setup minimization problem gives rise to matroid independence systems. This class includes in particular all N-free ordered sets and all M-free ordered sets of height at most one. The matroid structure can be recognized efficiently; whence the matroid greedy algorithm may be used to solve the weighted minimization problem.
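A generic sketch of the matroid greedy algorithm referred to here, written against a hypothetical independence oracle is_independent (the structure of the ordered set would enter only through that oracle, which is assumed here rather than derived from the paper):

    def matroid_greedy_min_weight_basis(elements, weight, is_independent):
        """Generic matroid greedy: scan elements in order of increasing weight
        and keep an element whenever it preserves independence. Given a matroid
        independence oracle, this returns a minimum-weight basis."""
        basis = []
        for e in sorted(elements, key=weight):
            if is_independent(basis + [e]):
                basis.append(e)
        return basis

    # toy example: the uniform matroid of rank 2 over {a, b, c}
    elems = ["a", "b", "c"]
    w = {"a": 3, "b": 1, "c": 2}.__getitem__
    print(matroid_greedy_min_weight_basis(elems, w, lambda s: len(s) <= 2))
    # prints ['b', 'c'], the two lightest elements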

12.
In this paper, we approach the quality of a greedy algorithm for the maximum weighted clique problem from the viewpoint of matroid theory. More precisely, we consider the clique complex of a graph (the collection of all cliques of the graph), which is also called a flag complex, and investigate the minimum number k such that the clique complex of a given graph can be represented as the intersection of k matroids. This number k can be regarded as a measure of "how complex a graph is with respect to the maximum weighted clique problem", since a greedy algorithm is a k-approximation algorithm for this problem. For any k>0, we characterize graphs whose clique complexes can be represented as the intersection of k matroids. As a consequence, we can see that the class of clique complexes is the same as the class of the intersections of partition matroids. Moreover, we determine how many matroids are necessary and sufficient for the representation of all graphs with n vertices. This number turns out to be n−1. Other related investigations are also given.
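The greedy algorithm analyzed in this matroid-intersection framework, applied to the clique complex, amounts to the following sketch (illustrative Python, with adjacency assumed to be a dict of neighbour sets): scan vertices by non-increasing weight and keep a vertex whenever the chosen set remains a clique. When the clique complex is the intersection of k matroids, a greedy of this kind is a k-approximation.

    def greedy_max_weight_clique(adj, weight):
        """Greedy on the clique complex: scan vertices by non-increasing weight
        and add a vertex if it is adjacent to every vertex chosen so far,
        so the chosen set always stays a clique."""
        clique = []
        for v in sorted(adj, key=weight, reverse=True):
            if all(v in adj[u] for u in clique):
                clique.append(v)
        return clique

    # toy graph: triangle {1,2,3} plus a heavy pendant vertex 4 attached to 1
    adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
    w = {1: 1, 2: 2, 3: 2, 4: 5}.__getitem__
    print(greedy_max_weight_clique(adj, w))  # prints [4, 1], a maximal clique of weight 6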

13.
A (v,k,t)-covering design is a collection of k-subsets (called blocks) of a v-set V such that every t-subset of V is contained in at least one block. Given v, k and t, the goal of the covering design problem is to find a covering made of a minimum number of blocks. In this paper, we present a new tabu algorithm for tackling this problem. Our algorithm exploits a new implementation designed in order to evaluate efficiently the performance of the neighbors of the current configuration. The new implementation is much less space-consuming than the currently used technique, making it possible to tackle much larger problem instances. It is also significantly faster. Thanks to these improved data structures, our tabu algorithm was able to improve the upper bound of more than 50 problem instances.

14.
We propose a quasi-greedy algorithm for approximating the classical uncapacitated 2-level facility location problem (2-LFLP). Our algorithm, unlike the standard greedy algorithm, selects a sub-optimal candidate at each step. It also relates the minimization 2-LFLP problem, in an interesting way, to the maximization version of the single-level facility location problem. Another feature of our algorithm is that it combines the technique of randomized rounding with that of dual fitting. This new approach enables us to approximate the metric 2-LFLP in polynomial time with a ratio of 1.77, a significant improvement on the previously known approximation ratios. Moreover, our approach results in a local improvement procedure for the 2-LFLP, which is useful in improving the approximation guarantees for several other multi-level facility location problems. An additional result of our approach is an O(ln(n))-approximation algorithm for the non-metric 2-LFLP, where n is the number of clients. This is the first non-trivial approximation for a non-metric multi-level facility location problem. An extended abstract of this paper appeared in the Proceedings of the 15th ACM-SIAM Symposium on Discrete Algorithms (SODA), January 2004.

15.
Faigle, Ulrich; Kern, Walter. Order, 2000, 17(4): 353–375
An algebraic model generalizing submodular polytopes is presented, where modular functions on partially ordered sets take over the role of vectors in R^n. This model unifies various generalizations of combinatorial models in which the greedy algorithm and the Monge algorithm are successful, as well as generalizations of the notions of core and Weber set in cooperative game theory. As a further application, we show that an earlier model of ours, as well as the algorithmic model of Queyranne, Spieksma and Tardella for the Monge algorithm, can be treated within the framework of usual matroid theory (on unordered ground sets), which also permits the efficient algorithmic solution of the intersection problem within this model.

16.
Polynomial-time approximation schemes for packing and piercing fat objects
We consider two problems: given a collection of n fat objects in a fixed dimension, (1) (packing) find the maximum subcollection of pairwise disjoint objects, and (2) (piercing) find the minimum point set that intersects every object. Recently, Erlebach, Jansen, and Seidel gave a polynomial-time approximation scheme (PTAS) for the packing problem, based on a shifted hierarchical subdivision method. Using shifted quadtrees, we describe a similar algorithm for packing but with a smaller time bound. Erlebach et al.'s algorithm requires polynomial space. We describe a different algorithm, based on geometric separators, that requires only linear space. This algorithm can also be applied to piercing, yielding the first PTAS for that problem.

17.
We study the problem of maximizing constrained non-monotone submodular functions and provide approximation algorithms that improve existing algorithms in terms of either the approximation factor or simplicity. Different constraints that we study are exact cardinality and multiple knapsack constraints, for which we achieve (0.25 − ε)-factor algorithms. We also show, as our main contribution, how to use the continuous greedy process for non-monotone functions and, as a result, obtain a 0.13-factor approximation algorithm for maximization over any solvable down-monotone polytope.

18.
Let G = (V, E) be a graph and T ⊆ V a node set. We call an edge set S a Steiner tree for T if S connects all pairs of nodes in T. In this paper we address the following problem, which we call the weighted Steiner tree packing problem. Given a graph G = (V, E) with edge weights w_e and edge capacities c_e, e ∈ E, and node sets T_1, …, T_N, find edge sets S_1, …, S_N such that each S_k is a Steiner tree for T_k, at most c_e of these edge sets use edge e for each e ∈ E, and the sum of the weights of the edge sets is minimal. Our motivation for studying this problem arises from a routing problem in VLSI design, where given sets of points have to be connected by wires. We consider the Steiner tree packing problem from a polyhedral point of view and define an associated polyhedron, called the Steiner tree packing polyhedron. The goal of this paper is to (partially) describe this polyhedron by means of inequalities. It turns out that, under mild assumptions, each inequality that defines a facet for the (single) Steiner tree polyhedron can be lifted to a facet-defining inequality for the Steiner tree packing polyhedron. The main emphasis of this paper lies on the presentation of so-called joint inequalities that are valid and facet-defining for this polyhedron. Inequalities of this kind involve at least two Steiner trees. The classes of inequalities we have found form the basis of a branch & cut algorithm. This algorithm is described in our companion paper (in this issue).

19.
We consider the generalization of the classical P||Cmax problem (assign n jobs to m identical parallel processors by minimizing the makespan) arising when the number of jobs that can be assigned to each processor cannot exceed a given integer k. The problem is strongly NP-hard for any fixed k > 2. We briefly survey lower and upper bounds from the literature. We introduce greedy heuristics, local search and a scatter search approach. The effectiveness of these approaches is evaluated through extensive computational comparison with a depth-first branch-and-bound algorithm that includes new lower bounds and dominance criteria.
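One greedy heuristic of the kind surveyed here is longest-processing-time-first restricted to processors that still have spare cardinality; the sketch below is illustrative (names and the feasibility assumption are ours, not necessarily the authors' exact heuristic).

    import heapq

    def lpt_with_cardinality(jobs, m, k):
        """Greedy for P||Cmax with at most k jobs per processor: sort jobs by
        non-increasing processing time and assign each to the least-loaded
        processor that still has fewer than k jobs. Assumes len(jobs) <= m*k."""
        assert len(jobs) <= m * k, "instance is infeasible"
        heap = [(0.0, 0, i) for i in range(m)]   # (load, job count, processor id)
        heapq.heapify(heap)
        assignment = {i: [] for i in range(m)}
        for p in sorted(jobs, reverse=True):
            skipped = []
            load, count, i = heapq.heappop(heap)
            while count >= k:                    # processor already full, set it aside
                skipped.append((load, count, i))
                load, count, i = heapq.heappop(heap)
            assignment[i].append(p)
            heapq.heappush(heap, (load + p, count + 1, i))
            for item in skipped:                 # restore the full processors
                heapq.heappush(heap, item)
        makespan = max(sum(js) for js in assignment.values())
        return assignment, makespan

    # toy instance: 6 jobs, 2 processors, at most 3 jobs per processor
    print(lpt_with_cardinality([7, 6, 5, 4, 3, 2], m=2, k=3))  # makespan 14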

20.
We study budgeted variants of classical cut problems: the Multiway Cut problem, the Multicut problem, and the k-Cut problem, and provide approximation algorithms for these problems. Specifically, for the budgeted multiway cut and the k-cut problems we provide constant-factor approximation algorithms. We show that the budgeted multicut problem is at least as hard to approximate as the sparsest cut problem, and we provide a bi-criteria approximation algorithm for it.
