Similar Literature
20 similar records retrieved (search time: 15 ms)
1.
We give improved solutions for the problem of generating the k smallest spanning trees in a graph and in the plane. Our algorithm for general graphs takes time O(m log β(m, n) + k²); for planar graphs this bound can be improved to O(n + k²). We also show that the k best spanning trees for a set of points in the plane can be computed in time O(min(k²n + n log n, k² + kn log(n/k))). The k best orthogonal spanning trees in the plane can be found in time O(n log n + kn log log(n/k) + k²).

2.
We analyse a greedy heuristic for finding small dominating sets in graphs: bounds on the size of the dominating set so produced had previously been derived in terms of the size of a smallest dominating set and the number of vertices and edges in the graph, respectively. We show that computing the resulting small dominating set is P-hard and so cannot be done efficiently in parallel (in the context of the PRAM model of parallel computation). We also consider a related non-deterministic greedy heuristic.
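The abstract does not spell the rule out, but the greedy heuristic usually analysed for dominating sets repeatedly picks the vertex whose closed neighbourhood covers the most not-yet-dominated vertices. A minimal sketch of that standard rule (the adjacency encoding and example are ours, not the paper's):

def greedy_dominating_set(adj):
    """Greedy heuristic: repeatedly pick the vertex that dominates the most
    vertices that are not yet dominated. `adj` maps a vertex to its neighbour set."""
    undominated = set(adj)
    dom_set = []
    while undominated:
        # coverage of v = closed neighbourhood of v intersected with the undominated set
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom_set.append(best)
        undominated -= {best} | adj[best]
    return dom_set

# toy example: the path 0-1-2-3-4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(adj))   # [1, 3]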

3.
This paper shows that it is possible to find all isomorphic subtrees of a binary tree in linear time and space.

4.
An arborescence of a multihop radio network is a directed spanning tree (with root x) such that the edges are directed away from the root. Based upon an arborescence, x can broadcast a message to other nodes according to the directed edges of the spanning tree. The minimum transmission power arborescence problem is to find an arborescence such that the message can be broadcast to other nodes by using a minimal amount of transmission power. The minimum delay arborescence problem is to find an arborescence such that a message can be broadcast to other nodes by using a minimal number of broadcast transmissions. In this paper we show that both these problems are NP-complete. The reductions are from the maximum leaf spanning tree problem. A reverse arborescence is similar to an arborescence except that the edges are directed toward the root. Based upon a reverse arborescence, the root node can collect information from other nodes. In this paper we also show that the reverse minimum transmission power arborescence problem can be solved with the same computational complexity as that of finding a minimum cost spanning tree, and the reverse minimum delay arborescence problem can be solved with the same computational complexity as that of finding a spanning tree.

5.
The partitioning of the vertices of an undirected graph, in a way that makes its quotient graph a tree, mirrors a way of permuting a square symmetric matrix to allow its factoring with little fill-in. We analyze the complexity of finding the best partitioning and show that it is NP-complete. We also give a new and simpler implementation of an algorithm that finds a maximal quotient tree.

6.
Let G be a bipartite graph with natural edge weights, and let W be a function from the set of vertices of G into the natural numbers. A W-matching of G is a subset of the set of edges of G such that for each vertex v the total weight of edges in the subset incident to v does not exceed W(v). Let m be a natural number. We show that the problem of deciding whether there is a W-matching in G whose total weight is not less than m is NP-complete even if G is bipartite and its edge weights as well as the W(v)-constraints are constantly bounded.
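Since the decision problem is NP-complete, there is no efficient search procedure to sketch here, but the definition itself is easy to operationalise. A minimal checker for a candidate edge subset (the instance encoding below is ours):

def is_w_matching(edges, weight, W, subset):
    """Check whether `subset` (a set of edge indices) is a W-matching and return
    (feasible, total weight). edges[i] = (u, v); weight[i] = edge weight; W[v] = capacity."""
    load, total = {}, 0
    for i in subset:
        u, v = edges[i]
        load[u] = load.get(u, 0) + weight[i]
        load[v] = load.get(v, 0) + weight[i]
        total += weight[i]
    return all(load[v] <= W[v] for v in load), total

# toy bipartite instance with vertex classes {a, b} and {x, y}
edges = [("a", "x"), ("a", "y"), ("b", "x")]
weight = [2, 1, 2]
W = {"a": 3, "b": 2, "x": 2, "y": 1}
print(is_w_matching(edges, weight, W, {0, 1}))   # (True, 3)
print(is_w_matching(edges, weight, W, {0, 2}))   # (False, 4): x carries weight 4 > W[x]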

7.
Approximating maximum independent sets by excluding subgraphs
An approximation algorithm for the maximum independent set problem is given, improving the best performance guarantee known to O(n/(log n)²). We also obtain the same performance guarantee for graph coloring. The results can be combined into a surprisingly strong simultaneous performance guarantee for the clique and coloring problems. The framework of subgraph-excluding algorithms is presented. We survey the known approximation algorithms for the independent set (clique), coloring, and vertex cover problems and show how almost all fit into that framework. We show that among subgraph-excluding algorithms, the ones presented achieve the optimal asymptotic performance guarantees. A preliminary version of this paper appeared in [9]. Supported in part by National Science Foundation Grant CCR-8902522 and PYI Award CCR-9057488. Research done at Rutgers University. Supported in part by a Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) fellowship.

8.
A number of problems concerning sets of points in the plane are studied, e.g. establishing whether a set contains a subset of size 4 whose points are the vertices of a square or rectangle. Both the problem of finding axis-parallel squares and rectangles and that of finding arbitrarily oriented squares and rectangles are studied. Efficient algorithms are obtained for all of them. Furthermore, we investigate the generalizations to d-dimensional space, where the problem is to find hyperrectangles and hypercubes. Also, upper and lower bounds are given for combinatorial problems on the maximum number of subsets of size 4 whose points are the vertices of a square or rectangle. Then we state an equivalence between the problem of finding rectangles and the problem of finding K_{2,2} subgraphs in bipartite graphs. Thus we immediately have an efficient algorithm for this graph problem. This work was partially supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). Work of the second author was also supported by the Dutch Organisation for Scientific Research (N.W.O.).
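For intuition on the rectangle/K_{2,2} connection (x-coordinates as one vertex class, y-coordinates as the other, each point an edge): a repeated y-pair occurring in two different columns is exactly a K_{2,2}, i.e. an axis-parallel rectangle. A small sketch of that idea only, not the paper's algorithm:

from collections import defaultdict
from itertools import combinations

def find_axis_parallel_rectangle(points):
    """Return the four corners of some axis-parallel rectangle in `points`, or None."""
    columns = defaultdict(list)          # x-coordinate -> list of y-coordinates
    for x, y in points:
        columns[x].append(y)
    seen = {}                            # (y1, y2) -> x of a column containing that pair
    for x, ys in columns.items():
        for y1, y2 in combinations(sorted(ys), 2):
            if (y1, y2) in seen:
                x0 = seen[(y1, y2)]
                return [(x0, y1), (x0, y2), (x, y1), (x, y2)]
            seen[(y1, y2)] = x
    return None

print(find_axis_parallel_rectangle([(0, 0), (0, 2), (3, 0), (3, 2), (1, 5)]))
# [(0, 0), (0, 2), (3, 0), (3, 2)]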

9.
A special clustering problem is discussed in this paper, called the compact set problem. The goal of the problem is to find all compact sets in a complete, weighted, undirected graph with n vertices. A subset C of vertices is called a compact set if 1 < |C| < n and the maximum weight among all edges in C is smaller than the minimum weight among all edges connecting one vertex in C and another vertex not in C. An algorithm with complexity O(n²) is given for the problem, improving the previous results. This research was partially supported by the National Science Council of the Republic of China under Grant NSC81-0408-E-216-502.
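A direct transcription of the definition, useful as a reference check (the paper's O(n²) algorithm for enumerating all compact sets is of course more involved); this sketch assumes the graph is given as a symmetric weight matrix:

def is_compact(weight, C, n):
    """Check the compact-set definition for C over vertices 0..n-1,
    where weight[u][v] is the weight of edge (u, v) for u != v."""
    C = set(C)
    if not (1 < len(C) < n):
        return False
    inside = [weight[u][v] for u in C for v in C if u < v]
    crossing = [weight[u][v] for u in C for v in range(n) if v not in C]
    return max(inside) < min(crossing)

w = [[0, 1, 5, 6],
     [1, 0, 7, 5],
     [5, 7, 0, 2],
     [6, 5, 2, 0]]
print(is_compact(w, {0, 1}, 4))   # True: max inside weight 1 < min crossing weight 5
print(is_compact(w, {0, 2}, 4))   # False: inside weight 5, but edge (0,1) crosses with weight 1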

10.
The problem of superscalar instruction scheduling is studied and an analysis of a heuristic scheduling algorithm is presented. First, a superscalar architecture is characterized by k, the number of types of functional units employed, m_i, the number of type-i functional units, P_ij, the j-th functional unit of type i, and z, the maximal number of delay cycles incurred by the execution of instructions. A program trace to be scheduled is modeled by a directed acyclic graph with delays on precedence relations. These two models reflect most of the flavor of the superscalar instruction scheduling problem. A heuristic scheduling algorithm called the ECG-algorithm is designed by compiling two scheduling guidelines. The performance of the ECG-algorithm is evaluated through worst-case analysis. Letting w_ECG denote the length of an ECG-schedule and w_opt the length of an optimal schedule, we establish the bound w_ECG/w_opt ≤ k + 1 − 2/[max{m_i}(z + 1)], which is smaller than other known bounds.

11.
In this paper, we introduce an implicit data structure which represents a forest-structured partial order in order to efficiently perform, with respect to both time and space, the following operations: 1) testing the relation between two elements (checking whether the two elements are related), and 2) given an element u, searching for its father. The first operation can be performed in constant time, while the second one requires polylogarithmic time (logarithmic in the case of bounded degree). The data structure represents the order relation by referring only to internal nodes of the forest, thus achieving in many cases a significant saving in space occupation. Finally, the algorithm is shown to be optimal in a restricted computation model by deriving a lower bound on the space complexity within such a model. Work partially performed in the framework of Esprit BRA project 3075 "Algorithms and Complexity" and of the Italian MURST 40% project "Algoritmi e Strutture di Calcolo", and partially supported by the National Research Council (CNR) project "Sistemi Informatici e Calcolo Parallelo".
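The abstract does not reveal the representation itself; for readers unfamiliar with constant-time relation tests in forests, the textbook device is DFS interval labelling, sketched below. This is only that standard idea, not the paper's space-efficient structure.

def label_forest(children, roots):
    """Assign DFS entry/exit times: u is an ancestor of (or equal to) v
    exactly when tin[u] <= tin[v] <= tout[u]."""
    tin, tout, clock = {}, {}, 0
    for r in roots:
        stack = [(r, False)]
        while stack:
            node, closing = stack.pop()
            if closing:
                tout[node] = clock
                clock += 1
                continue
            tin[node] = clock
            clock += 1
            stack.append((node, True))
            for c in reversed(children.get(node, [])):
                stack.append((c, False))
    return tin, tout

def related(u, v, tin, tout):
    """True iff u and v are comparable in the forest partial order."""
    return tin[u] <= tin[v] <= tout[u] or tin[v] <= tin[u] <= tout[v]

children = {"r": ["a", "b"], "a": ["c"]}
tin, tout = label_forest(children, ["r"])
print(related("r", "c", tin, tout), related("a", "b", tin, tout))   # True False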

12.
Relaxed outer projections, weighted averages and convex feasibility
A new algorithmic scheme is proposed for finding a common point of finitely many closed convex sets. The scheme uses weighted averages (convex combinations) of relaxed projections onto approximating halfspaces. By varying the weights we generalize Cimmino's and Auslender's methods as well as more recent versions developed by Iusem & De Pierro and Aharoni & Censor. Our approach offers great computational flexibility and encompasses a wide variety of known algorithms as special instances. Also, since it is block-iterative, it lends itself to parallel processing. The research of the first author was partially supported by Ruhrgas under a NAVF grant.
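For intuition only, a minimal Cimmino-style iteration on explicit halfspaces (a weighted average of relaxed projections); the paper's scheme is more general, working with approximating halfspaces of arbitrary convex sets and block iterations. The names and the toy feasibility instance below are ours.

import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the halfspace {y : a·y <= b}."""
    violation = a @ x - b
    return x if violation <= 0 else x - (violation / (a @ a)) * a

def cimmino_step(x, halfspaces, weights, lam=1.0):
    """One weighted-average step of relaxed projections (relaxation 0 < lam < 2)."""
    avg = sum(w * project_halfspace(x, a, b) for (a, b), w in zip(halfspaces, weights))
    return x + lam * (avg - x)

# feasibility problem: x1 >= 0, x2 >= 0, x1 + x2 <= 1, each written as a·y <= b
H = [(np.array([-1.0, 0.0]), 0.0),
     (np.array([0.0, -1.0]), 0.0),
     (np.array([1.0, 1.0]), 1.0)]
weights = [1 / 3] * 3
x = np.array([2.0, -1.0])
for _ in range(200):
    x = cimmino_step(x, H, weights)
print(np.round(x, 3))   # converges toward a point satisfying all three constraints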

13.
An algorithm is presented for reconstructing visible regions from visible edge segments in object space. This has applications in hidden surface algorithms operating on polyhedral scenes and in cartography. A special case of reconstruction can be formulated as a graph problem: determine the faces of a straight-edge planar graph given in terms of its edges. This is accomplished in O(n log n) time using linear space for a graph with n edges, and is worst-case optimal. The graph may have separate components, but the components must not contain each other. The general problem of reconstruction is then solved by applying our algorithm to each component in the containment relation. Research of this author is supported by the National Science Foundation under grant no. ECS-8351942, and by the Schlumberger-Doll Research Labs, Ridgefield, Connecticut.

14.
This paper examines a partial match retrieval scheme which supports range queries for highly dynamic databases. The scheme relies on order-preserving multi-attribute hashing. In general, designing optimal indexes is NP-hard. Greedy algorithms used to determine the optimal indexes for simple partial match queries are not directly applicable because there is a larger number of queries to consider in determining the optimal indexes. In this paper we present heuristic algorithms which provide near-optimal solutions. The optimisation scheme we propose can be used to design other dynamic file structures such as the grid file, BANG file and multilevel grid file to further enhance their retrieval performance, taking into consideration the query distribution.

15.
We considered the following natural conjecture: for every sorting algorithm, every key will be involved in Ω(log n) comparisons for some input. We show that this is true for most of the keys and prove matching upper and lower bounds. Every sorting algorithm for some input will involve n − n^ε/2 + 1 keys in at least ε·log₂ n comparisons, ε > 0. Further, there exists a sorting algorithm that will for every input involve at most n − n^ε/c keys in greater than ε·log₂ n comparisons, where c is a constant and ε > 0. The conjecture is shown to hold for natural algorithms from the literature.

16.
Computing a maximum independent set, weighted or unweighted, is NP-hard for general as well as planar graphs. However, polynomial time algorithms do exist for solving this problem on special classes of graphs. In this paper we present an efficient algorithm for computing a maximum weight independent set in trees. A divide and conquer approach based on centroid decomposition of trees is used to compute a maximum weight independent set within O(n log n) time, where n is the number of vertices in the tree. We introduce the notion of an alternating tree, which is crucial in obtaining a new independent set from the previous one.

17.
The maximum weight independent set problem for a general graph is NP-hard. But for some special classes of graphs, polynomial time algorithms do exist for solving it. Based on the divide-and-conquer strategy, Pawagi has presented an O(|V| log |V|) time algorithm for solving this problem on a tree. In this paper, we propose an O(|V|) time algorithm to improve Pawagi's result. The proposed algorithm is based on the dynamic programming strategy and is time optimal within a constant factor.
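Neither abstract states the recurrence, but the usual linear-time dynamic program on a rooted tree keeps, for every vertex v, the best weight of an independent set in v's subtree that includes v and the best that excludes v. A minimal sketch of that standard recurrence (the input encoding and example are ours):

def max_weight_independent_set(children, weight, root):
    """O(|V|) tree DP: incl[v] = best independent-set weight in v's subtree using v,
    excl[v] = best weight not using v."""
    incl, excl = {}, {}
    stack = [(root, False)]
    while stack:                                   # explicit post-order traversal
        v, processed = stack.pop()
        if not processed:
            stack.append((v, True))
            stack.extend((c, False) for c in children.get(v, []))
            continue
        incl[v] = weight[v] + sum(excl[c] for c in children.get(v, []))
        excl[v] = sum(max(incl[c], excl[c]) for c in children.get(v, []))
    return max(incl[root], excl[root])

children = {1: [2, 3], 3: [4, 5]}
weight = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}
print(max_weight_independent_set(children, weight, 1))   # 12, e.g. the set {1, 4, 5}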

18.
An efficient and numerically correct program called FastHull for computing the convex hulls of finite point sets in the plane is presented. It is based on the Akl-Toussaint algorithm and the MergeHull algorithm. Numerical correctness of the FastHull procedure is ensured by using special routines for interval arithmetic and multiple precision arithmetic. The FastHull algorithm guarantees O(N log N) running time in the worst case and has linear time performance for many kinds of input patterns. It appears that the FastHull algorithm runs faster than any currently known 2D convex hull algorithm for many input point patterns.
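FastHull itself combines the Akl-Toussaint filter with MergeHull and interval/multiple-precision arithmetic, none of which is reproduced here; purely as a point of reference, a minimal standard O(N log N) planar hull (Andrew's monotone chain, exact on integer input) looks like this:

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = half_hull(pts), half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]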

19.
We obtain new results for manipulating and searching semi-dynamic planar convex hulls (subject to deletions only), and apply them to derive improved bounds for two problems in geometry and scheduling. The new convex hull results are logarithmic time bounds for set splitting and for finding a tangent when the two convex hulls are not linearly separated. Using these results, we solve the following two problems optimally in O(n log n) time: (1) [matching] given n red points and n blue points in the plane, find a matching of red and blue points (by line segments) in which no two edges cross, and (2) [scheduling] given n jobs with due dates, linear penalties for late completion, and a single machine on which to process them, find a schedule of jobs that minimizes the maximum penalty.

20.
A well-known simple heuristic algorithm for solving the all-nearest-neighbors problem in the k-dimensional Euclidean space E^k, k > 1, projects the given point set S onto the x-axis. For each point q ∈ S a nearest neighbor in S under any L_p-metric (1 ≤ p ≤ ∞) is found by sweeping from q into two opposite directions along the x-axis. If δ_q denotes the distance between q and its nearest neighbor in S, the sweep process stops after all points in a vertical 2δ_q-slice centered around q have been examined. We show that this algorithm solves the all-nearest-neighbors problem for n independent and uniformly distributed points in the unit cube [0,1]^k in Θ(n^(2−1/k)) expected time, while its worst-case performance is Θ(n²).
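A minimal 2-D sketch of the heuristic as described: sort the points by x, then for each point scan outward in both directions and stop once the x-distance alone already exceeds the best distance found so far. Function and variable names are ours.

import math

def all_nearest_neighbors(points):
    """Projection-sweep heuristic; returns, for every index i, the index of
    a nearest neighbour of points[i]."""
    n = len(points)
    order = sorted(range(n), key=lambda i: points[i][0])       # sort by x
    rank_of = {idx: rank for rank, idx in enumerate(order)}
    result = {}
    for i in range(n):
        best, best_d = None, math.inf
        r = rank_of[i]
        for step in range(1, n):
            progressed = False
            for rank in (r - step, r + step):                  # sweep left and right
                if 0 <= rank < n:
                    j = order[rank]
                    dx = abs(points[j][0] - points[i][0])
                    if dx < best_d:                            # still inside the slice
                        progressed = True
                        d = math.dist(points[i], points[j])
                        if d < best_d:
                            best, best_d = j, d
            if not progressed:                                 # slice exhausted on both sides
                break
        result[i] = best
    return result

pts = [(0.1, 0.2), (0.15, 0.8), (0.9, 0.5), (0.12, 0.25)]
print(all_nearest_neighbors(pts))   # {0: 3, 1: 3, 2: 1, 3: 0}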
