Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, sequential and parallel algorithms are presented for finding a maximum-weight independent set in a weighted permutation graph. The sequential algorithm, based on dynamic programming, runs in O(n log n) time and requires O(n) space. The parallel algorithm runs in O(log^2 n) time using O(n^3/log n) processors on the CREW PRAM, or in O(log n) time using O(n^3) processors on the CRCW PRAM.
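For intuition, here is a minimal sequential sketch (not necessarily the paper's algorithm): under the usual convention that the vertices are the positions of a permutation pi and two positions are adjacent when they form an inversion, independent sets correspond to increasing subsequences, so a maximum-weight independent set can be computed as a maximum-weight increasing subsequence in O(n log n) using a Fenwick tree of prefix maxima. All names are illustrative, and weights are assumed non-negative.

```python
def max_weight_independent_set(pi, weight):
    """pi: permutation of 0..n-1; weight[i]: non-negative weight of position i."""
    n = len(pi)
    tree = [0.0] * (n + 1)          # Fenwick tree storing prefix maxima over values

    def update(v, w):               # record that some increasing run ending at value v has weight w
        v += 1
        while v <= n:
            tree[v] = max(tree[v], w)
            v += v & (-v)

    def query(v):                   # best weight over runs ending at a value < v
        best = 0.0
        while v > 0:
            best = max(best, tree[v])
            v -= v & (-v)
        return best

    answer = 0.0
    for i in range(n):
        best_here = query(pi[i]) + weight[i]
        update(pi[i], best_here)
        answer = max(answer, best_here)
    return answer

print(max_weight_independent_set([2, 0, 3, 1], [5, 1, 4, 2]))  # 9 (values 2 and 3)
```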

2.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time, respectively, both using O(n) space and p processors on the EREW PRAM. When p = O(n/log n), the cost, measured as the product of time and number of processors, is optimal. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms leads to a parallel quickhull algorithm on the EREW PRAM. The work of this author was partially supported by a fellowship from the College of Science, Old Dominion University, Norfolk, VA 23529, USA.

3.
An algorithm is presented for finding a polygon with the minimum number of edges nested between two simple n-sided polygons. The algorithm solves the problem in O(n log n) time, improving on the time complexity of two previous O(n^2) algorithms. The work was supported by NSERC grant OPG0041629.

4.
Parallel algorithms are proposed for analyzing activity networks: testing feasibility, topologically ordering the events, and computing the earliest and latest start times of all activities, and hence identifying the critical activities of the activity network. The first two algorithms have O(log n) time complexity and the remaining one achieves an O(log d log log n) time bound, where d is the diameter of the digraph representing the activity network with n nodes. All of these algorithms work on a CRCW PRAM and require O(n^3) processors.
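For reference, the quantities these parallel algorithms compute can be obtained sequentially by the standard critical path method; the following sketch (illustrative names, not the PRAM algorithms themselves) performs the feasibility test, topological ordering, earliest/latest event times, and critical-activity identification on an activity-on-edge network.

```python
from collections import deque

def critical_activities(n, edges):
    """n: number of events (0..n-1); edges: list of (u, v, duration)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, d in edges:
        adj[u].append((v, d))
        indeg[v] += 1

    # Topological order; also a feasibility test (fewer than n events means a cycle).
    order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, d in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != n:
        raise ValueError("network is infeasible: it contains a cycle")

    # Earliest event times: longest path from the sources, in topological order.
    earliest = [0] * n
    for u in order:
        for v, d in adj[u]:
            earliest[v] = max(earliest[v], earliest[u] + d)

    # Latest event times: backward sweep from the project finish time.
    finish = max(earliest)
    latest = [finish] * n
    for u in reversed(order):
        for v, d in adj[u]:
            latest[u] = min(latest[u], latest[v] - d)

    # An activity is critical when its slack is zero.
    return [(u, v) for u, v, d in edges if latest[v] - earliest[u] - d == 0]

print(critical_activities(4, [(0, 1, 3), (0, 2, 2), (1, 3, 4), (2, 3, 4)]))  # [(0, 1), (1, 3)]
```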

5.
A new algorithm for rearranging a heap is presented and analysed in the average case. The average-case upper bound for deleting the maximum element of a random heap is improved, and is shown to be less than ⌊log n⌋ + 0.299 + M(n) comparisons, where M(n) is between 0 and 1. It is also shown that a heap can be constructed using 1.650n + O(log n) comparisons with this algorithm, the best result for any algorithm that does not use any extra space. The expected time to sort n elements is argued to be less than n log n + 0.670n + O(log n), while simulation results point to an average case of n log n + 0.4n, which would make it the fastest in-place sorting algorithm. The same technique is used to show that the average number of comparisons when deleting the maximum element of a heap using Williams' algorithm for rearrangement is 2(⌊log n⌋ − 1.299 + L(n)), where L(n) is also between 0 and 1, and that the average cost of Floyd-Williams heapsort is at least 2n log n − 3.27n, counting only comparisons. An analysis of the number of interchanges when deleting the maximum element of a random heap, which is the same for both algorithms, is also presented.
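One common variant of this rearrangement idea, shown below as a plausible illustration rather than the exact algorithm analysed in the paper, pushes the hole left by the deleted maximum all the way down along the path of larger children and only then sifts the former last element back up from that leaf; this saves comparisons on average compared with Williams' top-down rearrangement, because the reinserted element usually belongs near the bottom.

```python
def delete_max(heap):
    """Remove and return the maximum of a binary max-heap stored in a Python list."""
    n = len(heap)
    top = heap[0]
    last = heap.pop()              # element that must be re-inserted somewhere
    n -= 1
    if n == 0:
        return top
    # Walk the hole down the path of larger children all the way to a leaf,
    # without comparing against `last` yet.
    i = 0
    while 2 * i + 1 < n:
        child = 2 * i + 1
        if child + 1 < n and heap[child + 1] > heap[child]:
            child += 1
        heap[i] = heap[child]
        i = child
    # Now sift `last` upwards from the vacated leaf to its proper place.
    heap[i] = last
    while i > 0 and heap[(i - 1) // 2] < heap[i]:
        heap[(i - 1) // 2], heap[i] = heap[i], heap[(i - 1) // 2]
        i = (i - 1) // 2
    return top

h = [9, 5, 7, 1, 3, 6]
print(delete_max(h), h)            # 9 [7, 5, 6, 1, 3]
```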

6.
We propose a parallel algorithm which reduces the problem of computing Hamiltonian cycles in tournaments to the problem of computing Hamiltonian paths. The running time of our algorithm is O(log n) using O(n^2/log n) processors on a CRCW PRAM, and O(log n log log n) using O(n^2/(log n log log n)) processors on an EREW PRAM. As a corollary, we obtain a new parallel algorithm for computing Hamiltonian cycles in tournaments. This algorithm can be implemented in O(log n) time using O(n^2/log n) processors in the CRCW model and in O(log^2 n) time with O(n^2/(log n log log n)) processors in the EREW model.
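As background for the reduction, every tournament contains a Hamiltonian path, and one can be built sequentially by simple insertion; the sketch below (illustrative names, not the parallel algorithm above) shows this path construction, which the cycle computation then builds on.

```python
def hamiltonian_path(beats):
    """beats[u][v] is True iff u beats v in the tournament; returns a Hamiltonian path."""
    path = []
    for v in range(len(beats)):
        # Insert v just before the first player on the path that v beats (or at the end).
        # The player in front of that position, if any, necessarily beats v.
        pos = next((i for i, u in enumerate(path) if beats[v][u]), len(path))
        path.insert(pos, v)
    return path

beats = [[False, False, False, False],
         [True,  False, False, False],
         [True,  True,  False, False],
         [True,  True,  True,  False]]   # player u beats every v < u
print(hamiltonian_path(beats))           # [3, 2, 1, 0]
```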

7.
An O(n log n) divide-and-conquer algorithm is presented for finding the relative neighborhood graph RNG(V) of a set V of n points in Euclidean space. If implemented in parallel, its time complexity is O(n) and it requires O(log n) processors.
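For reference, the definition computed by the algorithm can be checked directly by brute force in O(n^3) time; the following sketch (not the paper's divide-and-conquer) makes the RNG condition explicit: an edge (p, q) belongs to RNG(V) iff no third point r is simultaneously closer to both p and q than they are to each other.

```python
from math import dist

def relative_neighborhood_graph(points):
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d_ij = dist(points[i], points[j])
            # (i, j) is an RNG edge iff no k lies in the "lune" of i and j.
            if all(max(dist(points[i], points[k]), dist(points[j], points[k])) >= d_ij
                   for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges

print(relative_neighborhood_graph([(0, 0), (1, 0), (0, 1), (1, 1)]))
# [(0, 1), (0, 2), (1, 3), (2, 3)] -- the diagonals are excluded
```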

8.
In this paper, parallel algorithms are presented for solving several problems on permutation graphs. The coloring problem is solved in O(log^2 n) time using O(n^3/log n) processors on the CREW PRAM, or in O(log n) time using O(n^3) processors on the CRCW PRAM. The weighted clique problem, the weighted independent set problem, the clique cover problem, and the maximal layers problem are all solved with the same complexities. We also show that the longest common subsequence problem belongs to the class NC.
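As a sequential illustration of one of these problems (not the PRAM algorithm), coloring a permutation graph amounts to partitioning the permutation into a minimum number of increasing subsequences; a patience-sorting-style greedy does this in O(n log n), and the number of colors it uses equals the length of a longest decreasing subsequence, i.e. a maximum clique.

```python
import bisect

def color_permutation_graph(pi):
    """Greedy colouring of the permutation graph of pi (positions i < j adjacent iff
    pi[i] > pi[j]): each colour class must be an increasing subsequence.  Each element
    joins the class whose current tail is the largest value below it."""
    colors = [0] * len(pi)
    tails, cls = [], []                         # class tails kept sorted, with their colours
    num_classes = 0
    for pos, x in enumerate(pi):
        i = bisect.bisect_left(tails, x) - 1    # largest tail strictly below x
        if i >= 0:
            c = cls[i]
            tails.pop(i)
            cls.pop(i)
        else:
            c = num_classes                     # no suitable class: open a new colour
            num_classes += 1
        j = bisect.bisect_left(tails, x)        # x becomes the new tail of class c
        tails.insert(j, x)
        cls.insert(j, c)
        colors[pos] = c
    return colors

print(color_permutation_graph([2, 0, 3, 1]))    # [0, 1, 0, 1] -- two colours suffice
```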

9.
We give improved solutions for the problem of generating the k smallest spanning trees in a graph and in the plane. Our algorithm for general graphs takes O(m log β(m, n) + k^2) time; for planar graphs this bound can be improved to O(n + k^2). We also show that the k best spanning trees for a set of points in the plane can be computed in O(min(k^2 n + n log n, k^2 + kn log(n/k))) time. The k best orthogonal spanning trees in the plane can be found in O(n log n + kn log log(n/k) + k^2) time.

10.
We introduce the notion of searchability as a property of an in-place merging algorithm. We show that a pair of sorted arrays can be merged in place in linear time so that a search can be performed in logarithmic time at any point during the merging process. We apply this method to devise an implicit data structure which supports searches in O(log^2 n) time in the worst case and O(log n) time on average, and insertions in O(log n) worst-case time. This research was partly supported by NSERC under grant A8237 and was presented in preliminary form at the 10th International Colloquium on Automata, Languages and Programming. On leave from the University of Chile.

11.
One of the fundamental algorithmic problems in computer science is selecting the kth smallest element of a set S of n elements. In this paper we present a fast selection algorithm which runs in O(n^{1/8} log n) time on a mesh with multiple broadcasting of size n^{3/8} × n^{5/8}. Our result shows that, just as for semigroup computations, selection can be done much faster on a suitably chosen rectangular mesh than on square meshes. We also show that if every processor can store n^{1/9} items, then our selection algorithm runs in O(n^{1/9} log n) time on a mesh with multiple broadcasting of size n^{1/3} × n^{5/9}. Work supported by NASA under grant NCC1-99. This author was partly supported by NSF grant CCR-8009996.

12.
Ray Shooting Amidst Convex Polygons in 2D
We consider the problem of ray shooting in a two-dimensional scene consisting of m convex polygons with a total of n edges. We present a data structure that requires O(mn log m) space and preprocessing time and answers a ray shooting query in O(log^2 m log^2 n) time. If the polygons are pairwise disjoint, the space and preprocessing time can be improved to O((m^2 + n) log m) and O((m^2 + n log n) log m), respectively. Our algorithm also works for a collection of disjoint simple polygons. We also show that if only O(n) space is allowed, a ray shooting query among a collection of disjoint simple polygons can be answered in O(m/[formula]^{1+ε} log^2 n) time, for any ε > 0.

13.
Efficient parallel algorithms are presented, on the CREW PRAM model, for generating a succinct encoding of all-pairs shortest path information in a directed planar graph G with real-valued edge costs but no negative cycles. We assume that a planar embedding of G is given, together with a set of q faces that cover all the vertices. Our algorithm then runs in O(log^2 n) time and employs O(nq + M(q)) processors, where M(t) is the number of processors required to multiply two t×t matrices in O(log t) time. Note that whenever q < n, our processor bound is better than the best previous one, M(n). O(log^2 n)-time, n-processor algorithms are presented for various subproblems, including that of generating all-pairs shortest path information in a directed outerplanar graph. Our work is based on the fundamental hammock-decomposition technique of G. Frederickson. We achieve this decomposition in O(log n log* n) parallel time using O(n) processors. The hammock decomposition seems to be a fundamental operation that may help improve the efficiency of many parallel (and sequential) graph algorithms. This work was partially supported by the EEC ESPRIT Basic Research Action No. 3075 (ALCOM) and by the Ministry of Industry, Energy and Technology of Greece.

14.
In this paper, Tseng and Lee's parallel algorithm for solving the stable marriage problem is analyzed. It is shown that the average number of parallel proposals made by the algorithm is of order n using n processors on a CREW PRAM, where each parallel proposal requires O(log log n) time on the CREW PRAM by applying the parallel selection algorithms of Valiant or of Shiloach and Vishkin. Therefore, the parallel algorithm requires O(n log log n) time. The speed-up achieved is log n/log log n, since the average number of proposals required by applying McVitie and Wilson's algorithm to solve the stable marriage problem is O(n log n).
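For context, the proposals being counted are those of the classical deferred-acceptance scheme (Gale-Shapley), of which McVitie and Wilson's algorithm is a sequential refinement; the sketch below shows that proposal scheme with illustrative names, not the parallel algorithm analyzed above.

```python
from collections import deque

def stable_marriage(men_pref, women_pref):
    """men_pref[m] and women_pref[w] are complete preference lists, most preferred first."""
    n = len(men_pref)
    rank = [[0] * n for _ in range(n)]        # rank[w][m]: position of m in w's list
    for w in range(n):
        for r, m in enumerate(women_pref[w]):
            rank[w][m] = r
    next_choice = [0] * n                     # index of the next woman m will propose to
    fiance = [None] * n                       # fiance[w]: man currently engaged to w
    free_men = deque(range(n))
    while free_men:
        m = free_men.popleft()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m                     # w was free: she accepts
        elif rank[w][m] < rank[w][fiance[w]]:
            free_men.append(fiance[w])        # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free_men.append(m)                # proposal rejected; m will try his next choice
    return {fiance[w]: w for w in range(n)}   # man -> woman

print(stable_marriage([[0, 1], [1, 0]], [[0, 1], [1, 0]]))  # {0: 0, 1: 1}
```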

15.
Nested dissection is an algorithm invented by Alan George for preserving sparsity in Gaussian elimination on symmetric positive definite matrices. Nested dissection can be viewed as a recursive divide-and-conquer algorithm on an undirected graph; it uses separators in the graph, which are small sets of vertices whose removal divides the graph approximately in half. George and Liu gave an implementation of nested dissection that used a heuristic to find separators. Lipton and Tarjan gave an algorithm to find n^{1/2}-separators in planar graphs and two-dimensional finite element graphs, and Lipton, Rose, and Tarjan used these separators in a modified version of nested dissection, guaranteeing bounds of O(n log n) on fill and O(n^{3/2}) on operation count. We analyze the combination of the original George-Liu nested dissection algorithm and the Lipton-Tarjan planar separator algorithm. This combination is interesting because it is easier to implement than the Lipton-Rose-Tarjan version, especially in the framework of existing sparse matrix software. Using some topological graph theory, we prove O(n log n) fill and O(n^{3/2}) operation count bounds for planar graphs, two-dimensional finite element graphs, graphs of bounded genus, and graphs of bounded degree with n^{1/2}-separators. For planar and finite element graphs, the leading constant factor is smaller than that in the Lipton-Rose-Tarjan analysis. We also construct a class of graphs with n^{1/2}-separators for which our algorithm does not achieve an O(n log n) bound on fill. The work of this author was supported in part by the Hertz Foundation under a graduate fellowship and by the National Science Foundation under Grant MCS 82-02948. The work of this author was supported in part by the National Science Foundation under Grant MCS 78-26858 and by the Office of Naval Research under Contract N00014-76-C-0688.
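To make the recursion concrete, here is a minimal sequential sketch of the nested dissection ordering scheme. The routine find_separator is a hypothetical stand-in for the separator-finding step (the George-Liu heuristic or the Lipton-Tarjan planar separator in the papers above), and the small base-case threshold is arbitrary.

```python
def nested_dissection_order(graph, vertices, find_separator, order=None):
    """graph: adjacency dict; vertices: list of vertices to order.
    Returns an elimination order in which separators are numbered last."""
    if order is None:
        order = []
    if len(vertices) <= 2:                     # tiny subproblem: eliminate directly
        order.extend(vertices)
        return order
    separator = find_separator(graph, vertices)
    remaining = set(vertices) - set(separator)
    # Split the remaining vertices into connected components of graph minus the separator.
    components, seen = [], set()
    for v in remaining:
        if v in seen:
            continue
        comp, stack = [], [v]
        seen.add(v)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in graph[u]:
                if w in remaining and w not in seen:
                    seen.add(w)
                    stack.append(w)
        components.append(comp)
    for comp in components:                    # recursively order each piece first ...
        nested_dissection_order(graph, comp, find_separator, order)
    order.extend(separator)                    # ... and number the separator vertices last
    return order
```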

16.
Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue). Their data structure, the Fibonacci heap (or F-heap), supports arbitrary deletion in O(log n) amortized time and other heap operations in O(1) amortized time. In this paper we use F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs. For an undirected graph containing n vertices and m edges, our minimum spanning tree algorithm runs in O(m log β(m, n)) time, improved from O(m β(m, n)) time, where β(m, n) = min{i | log^(i) n ≤ m/n}. Our minimum spanning tree algorithm for directed graphs runs in O(n log n + m) time, improved from O(n log n + m log log log_{(m/n+2)} n). Both algorithms can be extended to allow a degree constraint at one vertex. Research supported in part by National Science Foundation Grant MCS-8302648. Research supported in part by National Science Foundation Grant MCS-8303139. Research supported in part by National Science Foundation Grant MCS-8300984 and a United States Army Research Office Program Fellowship, DAAG29-83-GO020.
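For contrast with the F-heap-based algorithms, here is a minimal sketch of Prim's algorithm using an ordinary binary heap (Python's heapq) with lazy deletion instead of decrease-key; it runs in O(m log n), whereas the improvements above come from replacing the heap with a Fibonacci heap and organizing the computation differently, which this sketch does not attempt.

```python
import heapq

def prim_mst(n, adj):
    """adj[u] = list of (v, weight); returns the total weight of an MST of a connected graph."""
    visited = [False] * n
    heap = [(0, 0)]                 # (best known edge weight, vertex); start from vertex 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue                # stale entry; lazy substitute for decrease-key
        visited[u] = True
        total += w
        for v, wv in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

adj = [[(1, 1), (2, 3)], [(0, 1), (2, 2)], [(0, 3), (1, 2)]]
print(prim_mst(3, adj))             # 3
```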

17.
Split trees are suitable data structures for storing records with different access frequencies. Under the assumption that the access frequencies are all distinct, Huang has proposed an O(n^4 log m) time algorithm to construct an (m+1)-way split tree for a set of n keys. In this paper, we generalize Huang's algorithm to deal with the case of non-distinct access frequencies. The technique used in the generalized algorithm is a generalization of Hester et al.'s, where the binary case was considered. The generalized algorithm runs in O(n^5 log m) time.

18.
This paper presents fast parallel algorithms for the following graph-theoretic problems: breadth-depth search of directed acyclic graphs; minimum-depth search of graphs; finding the minimum-weight paths between all node pairs of a weighted graph; and finding the critical activities of an activity-on-edge network. The first algorithm has O(log d log n) time complexity with O(n^3) processors, and the remaining algorithms achieve an O(log d log log n) time bound with O(n^2 [n/log log n]) processors, where d is the diameter of the graph or directed acyclic graph (which also represents an activity-on-edge network) with n nodes. These algorithms work on an unbounded shared-memory model of the single instruction stream, multiple data stream computer that allows both read and write conflicts.

19.
In this paper we develop some new data structures for storing a set of disks so that different types of intersection queries can be answered efficiently. If the disks are non-intersecting, we obtain a linear-size data structure that can report all k disks intersecting a query line segment in O(n^{λ+ε} + k) time, where n is the number of disks, λ = log_2(1+√5) − 1 ≈ 0.695, and ε is an arbitrarily small positive constant. If the segment is a full line, the query time becomes O(n^λ + k). For intersecting disks we obtain an O(n log n) size data structure that can answer an intersection query in O(n^{2/3} log^2 n + k) time. We also present a linear-size data structure for ray shooting queries, whose query time is O(n^λ). The research of the first two authors was supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The work of the third author was supported by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, NSF-STC88-09648.
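As a point of reference for the query semantics only, the sketch below answers a segment intersection query by brute force in O(n) time per query, with no data structure at all: a disk meets the segment exactly when the distance from its centre to the segment is at most its radius. Names are illustrative.

```python
def disks_intersecting_segment(disks, p, q):
    """disks: list of (cx, cy, r); p, q: segment endpoints.
    Returns the indices of all disks whose interior or boundary meets segment pq."""
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    hits = []
    for i, (cx, cy, r) in enumerate(disks):
        # Project the centre onto the segment, clamping the parameter to [0, 1].
        if seg_len2 == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
        nx, ny = px + t * dx, py + t * dy      # nearest point of the segment to the centre
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r:
            hits.append(i)
    return hits

print(disks_intersecting_segment([(0, 0, 1), (5, 5, 1)], (-2, 0), (2, 0)))  # [0]
```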

20.
Cheriyan and Hagerup developed a randomized algorithm to compute the maximum flow in a graph with n nodes and m edges in O(mn + n^2 log^2 n) expected time. The randomization is used to efficiently play a certain combinatorial game that arises during the computation. We give a version of their algorithm in which a more general version of the game arises. We then give a strategy for the game that yields a deterministic algorithm for computing the maximum flow in a directed graph with n nodes and m edges in O(mn log_{m/(n log n)} n) time. Our algorithm gives an O(mn) deterministic algorithm for all m/n = Ω(n^ε) for any positive constant ε, and is currently the fastest deterministic algorithm for computing maximum flow as long as m/n = ω(log n).

