Similar Documents
20 similar documents found
1.
This paper is the second part of a study devoted to the mutual exclusion scheduling (MES) problem. Given a simple undirected graph G and an integer k, the problem is to find a minimum coloring of G such that each color is used at most k times. The cardinality of such a coloring is denoted by χ(G,k). When restricted to interval graphs or related classes such as circular-arc graphs and tolerance graphs, the problem has applications in workforce planning. Unfortunately, the problem is NP-hard for interval graphs, even if k is a constant greater than or equal to four [H.L. Bodlaender, K. Jansen, Restrictions of graph partition problems. Part I. Theoret. Comput. Sci. 148 (1995) 93-109]. In this paper, the problem is approached from a different point of view by studying a non-trivial and practical sufficient condition for optimality. In particular, the following proposition is demonstrated: if an interval graph G admits a coloring such that each color appears at least k times, then χ(G,k)=⌈n/k⌉. This proposition is extended to several classes of graphs related to interval graphs. Moreover, all our proofs are constructive and provide efficient algorithms to solve the MES problem for these graphs, given as input a coloring satisfying the condition.
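As a rough illustration of this sufficient condition (a sketch, not the paper's algorithms), the Python snippet below first-fit colors a set of intervals by left endpoint, which is optimal for interval graphs, and returns the bound ⌈n/k⌉ whenever that particular coloring happens to use every color at least k times. The interval representation and the function names are assumptions made for the example.

```python
from math import ceil

def greedy_interval_coloring(intervals):
    """First-fit coloring of intervals (left, right) in order of left endpoint.
    For interval graphs this uses the minimum number of colors.
    Returns the color classes as lists of intervals."""
    classes = []
    for lo, hi in sorted(intervals):
        for cls in classes:
            # a class is free if its last interval ends strictly before lo
            if cls[-1][1] < lo:
                cls.append((lo, hi))
                break
        else:
            classes.append([(lo, hi)])
    return classes

def mes_value_if_condition_holds(intervals, k):
    """If this coloring uses every color at least k times, the proposition
    stated in the abstract gives chi(G, k) = ceil(n / k)."""
    classes = greedy_interval_coloring(intervals)
    if classes and all(len(cls) >= k for cls in classes):
        return ceil(len(intervals) / k)
    return None  # the condition is not certified by this particular coloring

# Example: six pairwise-disjoint shifts and k = 3 give chi(G, 3) = 2.
shifts = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11)]
print(mes_value_if_condition_holds(shifts, 3))  # 2
```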

2.
Given an undirected multigraph G and a subset of vertices S ⊆ V(G), the STEINER TREE PACKING problem is to find a largest collection of edge-disjoint trees that each connects S. This problem and its generalizations have attracted considerable attention from researchers in different areas because of their wide applicability. The problem was shown to be APX-hard (no polynomial-time approximation scheme unless P=NP). In fact, prior to this paper, not even an approximation algorithm with asymptotic ratio o(n) was known despite several attempts. In this work, we present the first polynomial-time constant-factor approximation algorithm for the STEINER TREE PACKING problem. The main theorem is an approximate min-max relation between the maximum number of edge-disjoint trees that each connects S (S-trees) and the minimum size of an edge cut that disconnects some pair of vertices in S (S-cut). Specifically, we prove that if every S-cut in G has at least 26k edges, then G has at least k edge-disjoint S-trees; this answers Kriesell's conjecture affirmatively up to a constant multiple. * A preliminary version appeared in the Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS) 2004. † The author was supported by an Ontario Graduate Scholarship and a University of Toronto Fellowship.
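The constant-factor algorithm itself is beyond a short snippet, but the hedged Python sketch below illustrates the object being packed: it repeatedly extracts one S-tree by connecting the terminals along shortest paths and then deletes that tree's edges. This is only a naive greedy heuristic lower bound on the packing number, not the paper's method; the multigraph representation and function names are assumptions.

```python
from collections import deque

def shortest_path_to_set(adj, src, targets):
    """BFS over edges with remaining multiplicity > 0; returns the edge list of
    a shortest path from src to the nearest vertex of `targets`, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u in targets:
            path = []
            while parent[u] is not None:
                path.append((parent[u], u))
                u = parent[u]
            return path
        for v, mult in adj[u].items():
            if mult > 0 and v not in parent:
                parent[v] = u
                queue.append(v)
    return None

def greedy_s_tree_packing(edges, terminals):
    """Extract edge-disjoint S-trees greedily: connect the terminals one by one
    along shortest paths, delete the tree's edges, repeat. Only a heuristic
    lower bound on the packing number."""
    adj = {}
    for u, v in edges:  # multigraph: store edge multiplicities
        adj.setdefault(u, {})[v] = adj.setdefault(u, {}).get(v, 0) + 1
        adj.setdefault(v, {})[u] = adj.setdefault(v, {}).get(u, 0) + 1
    terminals = list(terminals)
    trees = []
    while True:
        tree, reached = [], {terminals[0]}
        for t in terminals[1:]:
            if t in reached:
                continue
            path = shortest_path_to_set(adj, t, reached)
            if path is None:
                return trees  # the remaining edges no longer connect S
            tree.extend(path)
            reached.update(x for e in path for x in e)
        if not tree:  # fewer than two distinct terminals: nothing to pack
            return trees
        for u, v in tree:  # consume this tree's edges
            adj[u][v] -= 1
            adj[v][u] -= 1
        trees.append(tree)

# A triangle with every edge doubled admits 2 edge-disjoint S-trees.
edges = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2), (0, 2)]
print(len(greedy_s_tree_packing(edges, [0, 1, 2])))  # 2
```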

3.
Paired domination on interval and circular-arc graphs
We study the paired-domination problem on interval graphs and circular-arc graphs. Given an interval model with endpoints sorted, we give an O(m+n) time algorithm to solve the paired-domination problem on interval graphs. The result is extended to solve the paired-domination problem on circular-arc graphs in O(m(m+n)) time.
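The linear-time interval-graph algorithm is intricate; as a hedged reference point only, the brute-force Python checker below spells out the problem definition (a dominating set whose induced subgraph has a perfect matching) on tiny graphs. All names here are illustrative assumptions, not the paper's procedure.

```python
from itertools import combinations

def has_perfect_matching(vertices, adj):
    """Backtracking perfect-matching test on the subgraph induced by `vertices`."""
    def solve(remaining):
        if not remaining:
            return True
        u = remaining[0]
        for v in remaining[1:]:
            if v in adj[u] and solve([w for w in remaining if w not in (u, v)]):
                return True
        return False
    vs = sorted(vertices)
    return len(vs) % 2 == 0 and solve(vs)

def min_paired_dominating_set(n, edges):
    """Smallest D that dominates every vertex and induces a subgraph with a
    perfect matching (exponential time; for intuition only)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(2, n + 1, 2):  # paired dominating sets have even size
        for D in combinations(range(n), size):
            Dset = set(D)
            dominates = all(v in Dset or adj[v] & Dset for v in range(n))
            if dominates and has_perfect_matching(Dset, adj):
                return Dset
    return None

# Path 0-1-2-3: {1, 2} is a minimum paired dominating set.
print(min_paired_dominating_set(4, [(0, 1), (1, 2), (2, 3)]))  # {1, 2}
```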

4.
A typical problem in network design is to find a minimum-cost sub-network H of a given network G such that H satisfies some prespecified connectivity requirements. Our focus is on approximation algorithms for designing networks that satisfy vertex connectivity requirements. Our main tool is a linear programming relaxation of the following setpair formulation due to Frank and Jordan: a setpair consists of two subsets of vertices (of the given network G); each setpair has an integer requirement, and the goal is to find a minimum-cost subset of the edges of G such that each setpair is covered by at least as many edges as its requirement. We introduce the notion of skew bisupermodular functions and use it to prove that the basic solutions of the linear program are characterized by “non-crossing families” of setpairs. This allows us to apply Jain's iterative rounding method to find approximately optimal integer solutions. We give two applications. (1) In the k-vertex connectivity problem we are given a (directed or undirected) graph G=(V,E) with non-negative edge costs, and the task is to find a minimum-cost spanning subgraph H such that H is k-vertex connected. Let n=|V|, and let ε<1 be a positive number such that k≤(1−ε)n. We give an -approximation algorithm for both problems (directed or undirected), improving on the previous best approximation guarantees for k in the range . (2) We give a 2-approximation algorithm for the element connectivity problem, matching the previous best approximation guarantee due to Fleischer, Jain and Williamson. * Supported in part by NSERC research grant OGP0138432. † Supported in part by NSF Career Award CCR-9875024.

5.
A new graph triconnectivity algorithm and its parallelization
We present a new algorithm for finding the triconnected components of an undirected graph. The algorithm is based on a method of searching graphs called open ear decomposition. A parallel implementation of the algorithm on a CRCW PRAM runs in O(log² n) parallel time using O(n+m) processors, where n is the number of vertices and m is the number of edges in the graph. A preliminary version of this paper was presented at the 19th Annual ACM Symposium on Theory of Computing, New York, NY, May 1987. Supported by NSF Grant DCR 8514961. Supported by NSF Grant ECS 8404866 and the Semiconductor Research Corporation Grant 86-12-109.

6.
The Maximum Cardinality Search (MCS) algorithm visits the vertices of a graph in some order, such that at each step, an unvisited vertex that has the largest number of visited neighbours becomes visited. A maximum cardinality search ordering (MCS-ordering) of a graph is an ordering of the vertices that can be generated by the MCS algorithm. The visited degree of a vertex v in an MCS-ordering is the number of neighbours of v that are before v in the ordering. The visited degree of an MCS-ordering ψ of G is the maximum visited degree over all vertices v in ψ. The maximum visited degree over all MCS-orderings of graph G is called its maximum visited degree. Lucena [A new lower bound for tree-width using maximum cardinality search, SIAM J. Discrete Math. 16 (2003) 345-353] showed that the treewidth of a graph G is at least its maximum visited degree. We show that the maximum visited degree is of size O(log n) for planar graphs, and give examples of planar graphs G with maximum visited degree k with O(k!) vertices, for all k ∈ N. Given a graph G, it is NP-complete to determine if its maximum visited degree is at least k, for any fixed k ≥ 7. Also, this problem does not have a polynomial-time approximation algorithm with constant ratio, unless P=NP. Variants of the problem are also shown to be NP-complete. In this paper, we also propose some heuristics for the problem, and report on an experimental analysis of them. Several tiebreakers for the MCS algorithm are proposed and evaluated. We also give heuristics that give upper bounds on the value of the maximum visited degree of a graph, which appear to give results close to optimal on many graphs from real-life applications.
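Since the MCS rule is stated explicitly above, a direct Python implementation can serve as a sketch (ties are broken arbitrarily here, whereas the paper studies specific tiebreakers); by Lucena's result, the visited degree of the produced ordering is a lower bound on treewidth. Function and variable names are assumptions.

```python
def mcs_ordering(adj):
    """Maximum Cardinality Search: repeatedly visit an unvisited vertex with
    the most visited neighbours. `adj` maps each vertex to its neighbours.
    Returns (ordering, visited degree of that ordering); the visited degree
    is a lower bound on treewidth (Lucena)."""
    unvisited = set(adj)
    visited_neighbours = {v: 0 for v in adj}
    ordering, max_visited_degree = [], 0
    while unvisited:
        # ties broken arbitrarily; the paper evaluates smarter tiebreakers
        v = max(unvisited, key=lambda u: visited_neighbours[u])
        max_visited_degree = max(max_visited_degree, visited_neighbours[v])
        ordering.append(v)
        unvisited.remove(v)
        for w in adj[v]:
            if w in unvisited:
                visited_neighbours[w] += 1
    return ordering, max_visited_degree

# A 4-cycle: every MCS-ordering has visited degree 2, matching treewidth 2.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(mcs_ordering(c4))
```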

7.
In this paper we present approximation algorithms based on a Lagrangian decomposition via a logarithmic potential reduction to solve a general packing or min–max resource sharing problem with M non-negative convex constraints on a convex set B. We generalize a method by Grigoriadis et al. to the case with weak approximate block solvers (i.e., with only constant, logarithmic or even worse approximation ratios). Given an accuracy , we show that our algorithm needs calls to the block solver, a bound independent of the data and the approximation ratio of the block solver. For small approximation ratios the algorithm needs calls to the block solver. As an application we study the problem of minimizing the maximum edge congestion in a multicast communication network. Interestingly the block problem here is the classical Steiner tree problem that can be solved only approximately. We show how to use approximation algorithms for the Steiner tree problem to solve the multicast congestion problem approximately. This work was done in part when the second author was studying at the University of Kiel. This paper combines our extended abstracts of the 2nd IFIP International Conference on Theoretical Computer Science, TCS 2002, Montréal, Canada and the 3rd Workshop on Approximation and Randomization Algorithms in Communication Networks, ARACNE 2002, Roma, Italy. This research was supported in part by the DFG - Graduiertenkolleg, Effiziente Algorithmen und Mehrskalenmethoden; by the EU Thematic Network APPOL I + II, Approximation and Online Algorithms, IST-1999-14084 and IST-2001-32007; by the EU Research Training Network ARACNE, Approximation and Randomized Algorithms in Communication Networks, HPRN-CT-1999-00112; by the EU Project CRESCCO, Critical Resource Sharing for Cooperation in Complex Systems, IST-2001-33135. The second author was also supported by an MITACS grant of Canada; and by the NSERC Discovery Grant DG 5-48923.

8.
9.
Motivated by many recent algorithmic applications, this paper aims to promote a systematic study of the relationship between the topology of a graph and the metric distortion incurred when the graph is embedded into ℓ1 space. The main results are: 1. Explicit constant-distortion embeddings of all series-parallel graphs, and all graphs with bounded Euler number. These are the first natural families known to have constant distortion (strictly greater than 1). Using the above embeddings, algorithms are obtained which approximate the sparsest cut in such graphs to within a constant factor. 2. A constant-distortion embedding of outerplanar graphs into the restricted class of ℓ1-metrics known as dominating tree metrics. A lower bound of Ω(log n) on the distortion for embeddings of series-parallel graphs into (distributions over) dominating tree metrics is also presented. This shows, surprisingly, that such metrics approximate distances very poorly even for families of graphs with low treewidth, and excludes the possibility of using them to explore the finer structure of ℓ1-embeddability. * A preliminary version of this work appeared in Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, 1999, pp. 399–408. This work was done while the author was at the University of California, Berkeley. Supported in part by NSF grants CCR-9505448 and CCR-9820951.

10.
We present the first polynomial-time approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut. This class of problems includes, among others, the generalized Steiner network problem, also called the survivable network design problem. If k is the maximum cut requirement of the problem, our solution comes within a factor of 2k of optimal. Our algorithm is primal-dual and shows the importance of this technique in designing approximation algorithms. Research supported by an NSF Graduate Fellowship, DARPA contracts N00014-91-J-1698 and N00014-92-J-1799, and AT&T Bell Laboratories. Research supported in part by Air Force contract F49620-92-J-0125 and DARPA contract N00014-92-J-1799. Part of this work was done while the author was visiting AT&T Bell Laboratories and Bellcore.

11.
A decomposition of a graph G=(V,E) is a partition of the vertex set into subsets (called blocks). The diameter of a decomposition is the least d such that any two vertices belonging to the same connected component of a block are at distance at most d. In this paper we prove (nearly best possible) statements of the form: any n-vertex graph has a decomposition into a small number of blocks each having small diameter. Such decompositions provide a tool for efficiently decentralizing distributed computations. In [4] it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n). Using a technique of Awerbuch [3] and Awerbuch and Peleg [5], we improve this result by showing that every graph has a decomposition of diameter O(log n) into O(log n) blocks. In addition, we give a randomized distributed algorithm that produces such a decomposition and runs in time O(log² n). The construction can be parameterized to provide decompositions that trade off between the number of blocks and the diameter. We show that this trade-off is nearly best possible for two families of graphs: the first consists of skeletons of certain triangulations of a simplex, and the second consists of grid graphs with added diagonals. The proofs in both cases rely on basic results in combinatorial topology, Sperner's lemma for the first class and Tucker's lemma for the second. A preliminary version of this paper appeared as "Decomposing Graphs into Regions of Small Diameter" in Proc. 2nd ACM-SIAM Symposium on Discrete Algorithms (1991) 321-330. This work was supported in part by NSF grant DMS87-03541 and by a grant from the Israel Academy of Science. This work was supported in part by NSF grant DMS87-03541 and CCR89-11388.
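As a loosely related illustration only (not the randomized distributed algorithm of the paper), the Python sketch below performs sequential ball growing: a BFS ball keeps absorbing its next layer while that layer more than doubles the ball, so each resulting block has radius at most log2 n. This bounds block diameter but, unlike the paper, says nothing about the number of blocks; the graph representation and names are assumptions.

```python
def ball_growing_decomposition(adj):
    """Partition the vertex set into blocks of O(log n) diameter by sequential
    ball growing: absorb the next BFS layer while it is larger than the current
    ball, so a ball grown r times has more than 2**r vertices and r < log2(n)."""
    unassigned = set(adj)
    blocks = []
    while unassigned:
        seed = next(iter(unassigned))
        ball, frontier = {seed}, {seed}
        while True:
            layer = {w for v in frontier for w in adj[v]
                     if w in unassigned and w not in ball}
            if len(layer) <= len(ball):  # growth stalled: close the block
                break
            ball |= layer
            frontier = layer
        blocks.append(ball)
        unassigned -= ball
    return blocks

# A 4x4 grid graph decomposes into a handful of low-diameter blocks.
grid = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 4 and 0 <= j + dj < 4]
        for i in range(4) for j in range(4)}
print([sorted(b) for b in ball_growing_decomposition(grid)])
```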

12.
The computational complexity of the following type of problem is studied. Given a geometric graphG=(P, S) whereP is a set of points in the Euclidean plane andS a set of straight (closed) line segments between pairs of points inP, we want to know whetherG possesses a crossingfree subgraph of a special type. We analyze the problem of detecting crossingfree spanning trees, one factors and two factors in the plane. We also consider special restrictions on the slopes and on the lengths of the edges in the subgraphs.Klaus Jansen acknowledges support by the Deutsche Forschungsgemeinschaft. Gerhard J. Woeginger acknowledges support by the Christian Doppler Laboratorium für Diskrete Optimierung.  相似文献   

13.
The spatial preferred attachment (SPA) model is a model for networked information spaces such as domains of the World Wide Web, citation graphs, and on-line social networks. It uses a metric space to model the hidden attributes of the vertices. Thus, vertices are elements of a metric space, and link formation depends on the metric distance between vertices. We show, through theoretical analysis and simulation, that for graphs formed according to the SPA model it is possible to infer the metric distance between vertices from the link structure of the graph. Precisely, the estimate is based on the number of common neighbours of a pair of vertices, a measure known as co-citation. To be able to calculate this estimate, we derive a precise relation between the number of common neighbours and metric distance. We also analyse the distribution of edge lengths, where the length of an edge is the metric distance between its end points. We show that this distribution has three different regimes, and that the tail of this distribution follows a power law.

14.
The Wiener polynomial of a graph G is a generating function for the distance distribution dd(G)=(D1,D2,…,Dt), where Di is the number of unordered pairs of distinct vertices at distance i from one another and t is the diameter of G. We use the Wiener polynomial and several related generating functions to obtain generating functions for distance distributions of unweighted and weighted graphs that model certain large classes of computer networks. These provide a straightforward means of computing distance and timing statistics when designing new networks or enlarging existing networks.
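A minimal sketch of the unweighted case: BFS from every vertex tallies the distance distribution (D1,…,Dt), which are exactly the coefficients of the Wiener polynomial. The adjacency-dictionary representation and function names are assumptions, not the paper's notation.

```python
from collections import deque, Counter

def wiener_polynomial(adj):
    """Coefficients of the Wiener polynomial W(G; x) = sum_i D_i x^i, where
    D_i counts unordered vertex pairs at distance i, computed by BFS from
    every vertex of an unweighted graph."""
    dist_counts = Counter()
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if d > 0:
                dist_counts[d] += 1
    # each unordered pair was counted twice, once from each endpoint
    return {d: c // 2 for d, c in sorted(dist_counts.items())}

# 4-cycle: four pairs at distance 1 and two at distance 2, so W(C4; x) = 4x + 2x^2.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wiener_polynomial(c4))  # {1: 4, 2: 2}
```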

15.
We describe a deterministic algorithm which, on input integers d, m and a real number ε ∈ (0,1), produces a subset S of [m]^d = {1,2,3,...,m}^d that hits every combinatorial rectangle in [m]^d of volume at least ε, i.e., every subset of [m]^d of the form R_1×R_2×...×R_d of size at least εm^d. The cardinality of S is polynomial in m(log d)/ε, and the time to construct it is polynomial in md/ε. The construction of such sets has applications in derandomization methods based on small sample spaces for general multivalued random variables. A preliminary version of this paper appeared in Proceedings of the 25th Annual ACM Symposium on Theory of Computing, 1993. Research partially done while visiting the International Computer Science Institute. Research supported in part by a grant from the Israel-USA Binational Science Foundation. A large portion of this research was done while still at the International Computer Science Institute in Berkeley, California. Research supported in part by National Science Foundation operating grants CCR-9304722 and NCR-9416101, and United States-Israel Binational Science Foundation grant No. 92-00226. Supported in part by NSF under grants CCR-8911388 and CCR-9215293 and by AFOSR grants AFOSR-89-0512 and AFOSR-90-0008, and by DIMACS, which is supported by NSF grant STC-91-19999 and by the New Jersey Commission on Science and Technology. Research partially done while visiting the International Computer Science Institute. Partially supported by NSF NYI Grant No. CCR-9457799. Most of this research was done while the author was at MIT, partially supported by an NSF Postdoctoral Fellowship. Research partially done while visiting the International Computer Science Institute.

16.
For z1, z2, z3 ∈ Z^n, the tristance d3(z1,z2,z3) is a generalization of the L1-distance on Z^n to a quantity that reflects the relative dispersion of three points rather than two. A tristance anticode A_d of diameter d is a subset of Z^n with the property that d3(z1,z2,z3) ≤ d for all z1, z2, z3 ∈ A_d. An anticode is optimal if it has the largest possible cardinality for its diameter d. We determine the cardinality and completely classify the optimal tristance anticodes in Z^2 for all diameters d ≥ 1. We then generalize this result to two related distance models: a different distance structure on Z^2 where d(z1,z2)=1 if z1, z2 are adjacent either horizontally, vertically, or diagonally, and the distance structure obtained when Z^2 is replaced by the hexagonal lattice A_2. We also investigate optimal tristance anticodes in Z^3 and optimal quadristance anticodes in Z^2, and provide bounds on their cardinality. We conclude with a brief discussion of the applications of our results to multi-dimensional interleaving schemes and to connectivity loci in the game of Go.
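Assuming the tristance d3 is the Steiner (grid) distance of the three points in the L1 setting described above, it reduces in Z^2 to the coordinate spreads, because a shortest connecting tree for three terminals is a star through the coordinate-wise median. The sketch below computes it under that assumption and checks the anticode property by brute force; all names are illustrative.

```python
from itertools import combinations

def tristance_z2(z1, z2, z3):
    """d3 of three points of Z^2, assuming d3 is the rectilinear Steiner
    distance: the length of a shortest grid tree connecting the three points,
    which contributes (max - min) per coordinate via the median vertex."""
    xs = [z1[0], z2[0], z3[0]]
    ys = [z1[1], z2[1], z3[1]]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def is_tristance_anticode(points, d):
    """Anticode property: every triple of points has tristance at most d."""
    return all(tristance_z2(a, b, c) <= d for a, b, c in combinations(points, 3))

# The 2x2 square {0,1}^2 is a tristance anticode of diameter 2.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(is_tristance_anticode(square, 2))  # True
```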

17.
Parallel computation offers a challenging opportunity to speed up the time-consuming enumerative procedures that are necessary to solve hard combinatorial problems. Theoretical analysis of such a parallel branch-and-bound algorithm is very hard, and empirical analysis is not straightforward, because the performance of a parallel algorithm cannot be evaluated simply by executing the algorithm on a few parallel systems. Among the difficulties encountered are the noise produced by other users on the system, the limited variation in parallelism (the number of processors in the system is strictly bounded) and the waste of resources involved: most of the time, the outcomes of all computations are already known and the only issue of interest is when these outcomes are produced. We will describe a way to simulate the execution of parallel branch-and-bound algorithms on arbitrary parallel systems in such a way that the memory and CPU requirements are very reasonable. The use of simulation has only minor consequences for the formulation of the algorithm.
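As a toy illustration of the idea only (a deliberately simplified round-synchronous model, not the simulation scheme of the paper), the Python sketch below replays a branch-and-bound run in which up to p open subproblems are expanded per unit time step, so node counts and parallel step counts can be read off without a parallel machine. The callbacks and names are assumptions.

```python
def simulate_parallel_bnb(root, branch, bound, value, processors):
    """Round-synchronous sketch of parallel branch and bound: in each unit time
    step up to `processors` open subproblems are expanded in parallel, the
    incumbent is updated, and children whose optimistic bound cannot beat it
    are pruned. Returns (nodes expanded, parallel time steps)."""
    pool, incumbent = [root], float("-inf")
    expanded = steps = 0
    while pool:
        steps += 1
        batch, pool = pool[:processors], pool[processors:]
        for node in batch:
            expanded += 1
            incumbent = max(incumbent, value(node))
            if bound(node) > incumbent:
                pool.extend(branch(node))
    return expanded, steps

# Toy run: a depth-3 binary tree with no pruning has 15 nodes; with 4
# simulated processors the run takes 5 parallel steps.
branch = lambda node: [node + "0", node + "1"] if len(node) < 3 else []
print(simulate_parallel_bnb("", branch, bound=lambda n: 1, value=lambda n: 0,
                            processors=4))  # (15, 5)
```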

18.
The seminal paper of Leighton and Rao (1988) and subsequent papers presented approximate min-max theorems relating multicommodity flow values and cut capacities in undirected networks, developed the divide-and-conquer method for designing approximation algorithms, and generated novel tools for utilizing linear programming relaxations. Yet, despite persistent research efforts, these achievements could not be extended to directed networks, excluding a few cases that are symmetric and therefore similar to undirected networks. This paper is an attempt to remedy the situation. We consider the problem of finding a minimum multicut in a directed multicommodity flow network, and give the first nontrivial upper bounds on the max flow-to-min multicut ratio. Our results are algorithmic, demonstrating nontrivial approximation guarantees. * Supported in part by NSERC research grant OGP0138432. Part of this work was done while visiting AT&T Labs–Research. Work at the Technion supported by Israel Science Foundation grant number 386/99, by BSF grants 96-00402 and 99-00217, by Ministry of Science contract number 9480198, by EU contract number 14084 (APPOL), by the CONSIST consortium (through the MAGNET program of the Ministry of Trade and Industry), and by the Fund for the Promotion of Research at the Technion.

19.
Given a connected graph G=(V, E) with |V|=n and maximum degree Δ such that G is neither a complete graph nor an odd cycle, Brooks' theorem states that G can be colored with Δ colors. We generalize this as follows: let G−v be Δ-colored; then v can be colored by considering the vertices in an O(log n) radius around v and by recoloring an O(log n) length augmenting path inside it. Using this, we show that Δ-coloring G is reducible in O(log³ n / log Δ) time to (Δ+1)-vertex coloring G in a distributed model of computation. This leads to fast distributed algorithms and a linear-processor NC algorithm for Δ-coloring. A preliminary version of this paper appeared as part of the paper "Improved Distributed Algorithms for Coloring and Network Decomposition Problems" in the Proceedings of the ACM Symposium on Theory of Computing, pages 581–592, 1992. This research was done when the authors were at the Computer Science Department of Cornell University. The research was supported in part by NSF PYI award CCR-89-96272 with matching funds from UPS and Sun Microsystems.
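For contrast with the Δ-coloring result, here is a minimal Python sketch of the sequential (Δ+1)-coloring that the reduction targets: scanning vertices in any order and assigning the smallest color unused by already-colored neighbors never needs more than Δ+1 colors. This is not the distributed algorithm of the paper; names are assumptions.

```python
def greedy_delta_plus_one_coloring(adj):
    """Sequential greedy coloring: give each vertex the smallest color unused
    by its already-colored neighbors. A vertex has at most Delta neighbors,
    so colors 0..Delta suffice; Brooks' theorem (and the paper's reduction)
    improves this to Delta colors when G is neither complete nor an odd cycle."""
    color = {}
    for v in adj:
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 5-cycle with maximum degree 2: greedy uses at most 3 colors.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(greedy_delta_plus_one_coloring(c5))
```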

20.
In this paper, we prove the first approximate max-flow min-cut theorem for undirected multicommodity flow. We show that for a feasible flow to exist in a multicommodity problem, it is sufficient that every cut's capacity exceeds its demand by a factor of O(log C log D), where C is the sum of all finite capacities and D is the sum of demands. Moreover, our theorem yields an algorithm for finding a cut that is approximately minimum relative to the flow that must cross it. We use this result to obtain an approximation algorithm for T. C. Hu's generalization of the multiway-cut problem. This algorithm can in turn be applied to obtain approximation algorithms for minimum deletion of clauses of a 2-CNF formula, via minimization, and other problems. We also generalize the theorem to hypergraph networks; using this generalization, we can handle CNF clauses with an arbitrary number of literals per clause. Most of the results in this paper were presented in preliminary form in "Approximation through multicommodity flow," Proceedings, 31st Annual Symposium on Foundations of Computer Science (1990), pp. 726–737. Research supported by the National Science Foundation under NSF grant CDA 8722809, by the Office of Naval Research and the Defense Advanced Research Projects Agency under contract N00014-83-K-0146 and ARPA Order No. 6320, Amendment 1. Research supported by NSF grant CCR-9012357 and by an NSF Presidential Young Investigator Award.

