Similar Documents
20 similar documents found (search time: 15 ms)
1.
Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue). Their data structure, the Fibonacci heap (or F-heap), supports arbitrary deletion in O(log n) amortized time and other heap operations in O(1) amortized time. In this paper we use F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs. For an undirected graph containing n vertices and m edges, our minimum spanning tree algorithm runs in O(m log β(m, n)) time, improved from O(m β(m, n)) time, where β(m, n) = min{i | log^(i) n ≤ m/n}. Our minimum spanning tree algorithm for directed graphs runs in O(n log n + m) time, improved from O(n log n + m log log log_(m/n+2) n). Both algorithms can be extended to allow a degree constraint at one vertex. Research supported in part by National Science Foundation Grant MCS-8302648. Research supported in part by National Science Foundation Grant MCS-8303139. Research supported in part by National Science Foundation Grant MCS-8300984 and a United States Army Research Office Program Fellowship, DAAG29-83-GO020.
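The abstract above states complexity bounds rather than the algorithm itself. As a rough illustration of how a priority queue drives a minimum spanning tree computation, here is a minimal sketch of Prim's algorithm in Python; Python's binary-heap heapq stands in for an F-heap (it has no O(1) decrease-key, so stale entries are skipped lazily), and the adjacency-list format is an assumption made only for this example.

```python
import heapq

def prim_mst(n, adj):
    """Prim's algorithm on an undirected graph given as an adjacency list
    adj[u] = [(v, w), ...].  A binary heap stands in for the F-heap of the
    paper; decrease-key is simulated by lazy insertion, giving O(m log n)
    rather than the paper's O(m log beta(m, n))."""
    visited = [False] * n
    total = 0
    heap = [(0, 0)]            # (edge weight, vertex), start from vertex 0
    vertices_added = 0
    while heap and vertices_added < n:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue           # stale entry left over from lazy insertion
        visited[u] = True
        total += w
        vertices_added += 1
        for v, wv in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

# Example: a square with one diagonal.
adj = {0: [(1, 1), (3, 4), (2, 5)],
       1: [(0, 1), (2, 2)],
       2: [(1, 2), (3, 3), (0, 5)],
       3: [(2, 3), (0, 4)]}
print(prim_mst(4, adj))        # 1 + 2 + 3 = 6
```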

2.
Very recently a new data structure, called a min-max heap, was presented for implementing the double-ended priority queue. A min-max heap on n keys is constructed in O(n) time; the minimum and maximum keys are found in constant time, and the operations of deleting the minimum, deleting the maximum and inserting a new key into the heap are performed in O(log n) time. In addition, the data structure can be stored implicitly, i.e. in an array of n elements without using any additional pointers. In this paper, we present lower bound results on the number of comparisons required, in the worst case, for the operations (i) to construct a min-max heap on a given set of keys; (ii) to convert a min-max heap into a max-min heap; and (iii) to merge two min-max heaps into one min-max heap. New upper bounds for the convert and merge operations are also derived. It is found that the main difference between traditional heaps and min-max heaps lies in the time needed to perform the merge operation. While traditional heaps can be merged efficiently, it is shown that min-max heaps are not sublinearly mergeable. Even the seemingly simple task of converting a min-max heap into a max-min heap cannot be performed in less than linear time. This research was supported by the Natural Science and Engineering Council of Canada under Grant No. A0392. (A preliminary version of this paper was presented at the 24th Annual Allerton Conference on Communication, Control and Computing.)
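For readers unfamiliar with the structure, the following sketch (not taken from the paper) shows the implicit array layout of a min-max heap: levels alternate between min and max, so find-min and find-max each cost O(1). The example heap contents are made up for illustration.

```python
import math

def level_is_min(i):
    """Levels of a min-max heap alternate; the root (index 0) is on a
    min level.  Index i sits on level floor(log2(i + 1))."""
    return int(math.log2(i + 1)) % 2 == 0

def find_min(h):
    """The minimum is always at the root."""
    return h[0] if h else None

def find_max(h):
    """The maximum is the root (single-element heap) or the larger of the
    root's two children, which sit on the first max level."""
    if not h:
        return None
    return max(h[:3]) if len(h) > 1 else h[0]

# A valid min-max heap stored implicitly in an array:
h = [2, 80, 70, 10, 40, 30, 50]
print([level_is_min(i) for i in range(7)])   # [True, False, False, True, True, True, True]
print(find_min(h), find_max(h))              # 2 80
```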

3.
A VLSI sorter of size O(n) can sort n elements in linear time when the input and output time are taken into account. If the input contains more than n elements, some preprocessing has to be performed. A VLSI partition algorithm that provides a solution to this problem is presented. The algorithm partitions the input data into two smaller parts as the quicksort algorithm does; that is, the elements of the first part will be smaller than the elements of the second part. The partition is repeated until the parts are small enough to fit in the sorter. It is shown that the average number of times each element must go through the partitioner is O(log k) for a data file of size kn, where n is the size of the sorter. In the worst case, where the partitioner fails to divide the input evenly, the elements must go O(k) times through the partitioner, as in the quicksort algorithm. The partitioner can also be used, with simple modifications, as a sorter, a stack, a queue, or as a priority queue. Other advantages of the VLSI algorithm are also discussed.
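A purely software analogue of the partition-until-it-fits strategy described above, sketched in Python; sorted() stands in for the O(n)-capacity hardware sorter and the sorter_size parameter is an assumption made only for this example.

```python
import random

def partition_sort(data, sorter_size):
    """Partition the input around a pivot (as in quicksort) until each
    part fits in a sorter of capacity `sorter_size`, then sort the small
    parts.  The hardware sorter is stood in for by Python's sorted()."""
    if len(data) <= sorter_size:
        return sorted(data)                # the O(n) VLSI sorter
    pivot = random.choice(data)
    low  = [x for x in data if x < pivot]
    mid  = [x for x in data if x == pivot]
    high = [x for x in data if x > pivot]
    return (partition_sort(low, sorter_size) + mid +
            partition_sort(high, sorter_size))

print(partition_sort([9, 3, 7, 1, 8, 2, 6, 5, 4, 0], sorter_size=4))
```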

4.
We present a data structure that can store a set of disjoint fat objects in d-space such that point location and bounded-size range searching with arbitrarily shaped ranges can be performed efficiently. The structure can deal with either arbitrary (fat) convex objects or nonconvex (fat) polytopes. The multipurpose data structure supports point location and range searching queries in time O(log^(d−1) n) and requires O(n log^(d−1) n) storage, after O(n log^(d−1) n log log n) preprocessing. The data structure and query algorithm are rather simple.

5.
We introduce the notion of searchability as a property of an in-place merging algorithm. We show that a pair of sorted arrays can be merged in place in linear time, so that a search can be performed in logarithmic time at any point during the merging process. We apply this method to devise an implicit data structure which can support searches in O(log^2 n) time in the worst case, and O(log n) on the average, and insertions in O(log n) time, in the worst case. This research was partly supported by NSERC under grant A8237 and presented in preliminary form at the 10th International Colloquium on Automata, Languages and Programming. On leave from the University of Chile.

6.
We prove a theorem on partitioning point sets in E^d (d fixed) and give an efficient construction of partition trees based on it. This yields a simplex range searching structure with linear space, O(n log n) deterministic preprocessing time, and O(n^(1−1/d) (log n)^O(1)) query time. With O(n^(1+δ)) preprocessing time, where δ is an arbitrary positive constant, a more complicated data structure yields query time O(n^(1−1/d) (log log n)^O(1)). This attains the lower bounds due to Chazelle [C1] up to polylogarithmic factors, improving and simplifying previous results of Chazelle et al. [CSW]. The partition result implies that, for r^d ≤ n^(1−δ), a (1/r)-approximation of size O(r^d) with respect to simplices for an n-point set in E^d can be computed in O(n log r) deterministic time. A (1/r)-cutting of size O(r^d) for a collection of n hyperplanes in E^d can be computed in O(n log r) deterministic time, provided that r ≤ n^(1/(2d−1)).

7.
In this paper, sequential and parallel algorithms are presented to find a maximum independent set with largest weight in a weighted permutation graph. The sequential algorithm, which is designed based on dynamic programming, runs in time O(n log n) and requires O(n) space. The parallel algorithm runs in O(log^2 n) time using O(n^3/log n) processors on the CREW PRAM, or O(log n) time using O(n^3) processors on the CRCW PRAM.
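One standard way to realize an O(n log n) sequential bound for this problem (not necessarily the paper's own formulation) uses the fact that an independent set in a permutation graph corresponds to an increasing subsequence of the defining permutation, so a maximum-weight independent set is a maximum-weight increasing subsequence. The Fenwick-tree sketch below illustrates that reduction and assumes the permutation values are 1..n.

```python
def max_weight_independent_set(perm, weight):
    """Max-weight independent set in the permutation graph of `perm`,
    computed as a maximum-weight increasing subsequence with a Fenwick
    tree of prefix maxima -- O(n log n)."""
    n = len(perm)
    tree = [0] * (n + 1)                  # Fenwick tree of prefix maxima

    def update(i, val):
        while i <= n:
            tree[i] = max(tree[i], val)
            i += i & (-i)

    def query(i):                         # max over values 1..i
        best = 0
        while i > 0:
            best = max(best, tree[i])
            i -= i & (-i)
        return best

    best = 0
    for pos in range(n):                  # scan positions left to right
        v = perm[pos]                     # values assumed to be 1..n
        cur = query(v - 1) + weight[pos]  # best set ending with a value < v
        update(v, cur)
        best = max(best, cur)
    return best

# perm = 3 1 4 2 5 with unit weights -> longest increasing subsequence, length 3
print(max_weight_independent_set([3, 1, 4, 2, 5], [1, 1, 1, 1, 1]))   # 3
```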

8.
In this paper, parallel algorithms are presented for solving some problems on permutation graphs. The coloring problem is solved in O(log^2 n) time using O(n^3/log n) processors on the CREW PRAM, or O(log n) time using O(n^3) processors on the CRCW PRAM. The weighted clique problem, the weighted independent set problem, the cliques cover problem, and the maximal layers problem are all solved with the same complexities. We can also show that the longest common subsequence problem belongs to the class NC.

9.
We present a parallel algorithm for finding the convex hull of a sorted set of points in the plane. Our algorithm runs in O(log n/log log n) time using O(n log log n/log n) processors in the Common CRCW PRAM computational model, which is shown to be time and cost optimal. The algorithm is based on n^(1/3) divide-and-conquer and uses a simple pointer-based data structure. Part of this work was done when the last three authors were at the Department of Computer and Information Science, Linköping University. The research of the second author was supported by the Academy of Finland.
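The paper's contribution is the optimal parallel bound; as a sequential point of reference for the same input (points already sorted by x), here is a minimal Andrew's monotone chain sketch, which runs in O(n) time after sorting. The example points are made up.

```python
def convex_hull_sorted(points):
    """Convex hull of points already sorted by x (ties by y), built as a
    lower chain plus an upper chain -- O(n) after sorting."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(pts):
        hull = []
        for p in pts:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()                 # drop points making a right turn
            hull.append(p)
        return hull

    lower = half_hull(points)
    upper = half_hull(list(reversed(points)))
    return lower[:-1] + upper[:-1]         # full hull, counter-clockwise

pts = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]   # already sorted by x
print(convex_hull_sorted(pts))   # [(0, 0), (4, 0), (3, 3), (1, 2)]
```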

10.
A dynamic data structure is given that maintains the minimal distance in a set of n points in k-dimensional space in O((log n)^k log log n) amortized time per update. The size of the data structure is bounded by O(n (log n)^k). Distances are measured in the Minkowski L_t-metric, where 1 ≤ t ≤ ∞. This is the first dynamic data structure that maintains the minimal distance in polylogarithmic time for fully on-line updates. This work was supported by the ESPRIT II Basic Research Actions Program, under Contract No. 3075 (project ALCOM).
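As a point of reference for the quantity being maintained, the sketch below (not from the paper) computes the Minkowski L_t distance and the minimal pairwise distance of a static point set by brute force in O(n^2) time; the dynamic structure above keeps this value current in polylogarithmic time per insertion or deletion.

```python
from itertools import combinations

def minkowski_dist(p, q, t):
    """L_t (Minkowski) distance between two k-dimensional points,
    with t = float('inf') giving the max metric."""
    diffs = [abs(a - b) for a, b in zip(p, q)]
    return max(diffs) if t == float("inf") else sum(d ** t for d in diffs) ** (1 / t)

def closest_pair_bruteforce(points, t=2):
    """O(n^2) static baseline for the minimal distance maintained
    dynamically by the structure described above."""
    return min(minkowski_dist(p, q, t) for p, q in combinations(points, 2))

pts = [(0, 0), (3, 4), (1, 1), (6, 0)]
print(closest_pair_bruteforce(pts))        # sqrt(2) ~ 1.414
```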

11.
Ray Shooting Amidst Convex Polygons in 2D
We consider the problem of ray shooting in a two-dimensional scene consisting of m convex polygons with a total of n edges. We present a data structure that requires O(mn log m) space and preprocessing time and that answers a ray shooting query in O(log^2 m log^2 n) time. If the polygons are pairwise disjoint, the space and preprocessing time can be improved to O((m^2 + n) log m) and O((m^2 + n log n) log m), respectively. Our algorithm also works for a collection of disjoint simple polygons. We also show that if we allow only O(n) space, a ray shooting query among a collection of disjoint simple polygons can be answered in O(m/[formula] + log^2 n) time, for any ε > 0.

12.
A parallel algorithm for depth-first searching of a directed acyclic graph (DAG) on a shared memory model of a SIMD computer is proposed. The algorithm uses two parallel tree traversal algorithms, one for the preorder traversal and the other for the postorder traversal of an ordered tree. Each of these traversal algorithms has a time complexity of O(log n) when O(n) processors are used, n being the number of vertices in the tree. The parallel depth-first search algorithm for a directed acyclic graph G with n vertices has a time complexity of O((log n)^2) when O(n^2.81/log n) processors are used.

13.
Efficient parallel algorithms are presented, on the CREW PRAM model, for generating a succinct encoding of all-pairs shortest path information in a directed planar graph G with real-valued edge costs but no negative cycles. We assume that a planar embedding of G is given, together with a set of q faces that cover all the vertices. Then our algorithm runs in O(log^2 n) time and employs O(nq + M(q)) processors (where M(t) is the number of processors required to multiply two t×t matrices in O(log t) time). Let us note here that whenever q < n, our processor bound is better than the best previous one (M(n)). O(log^2 n)-time, n-processor algorithms are presented for various subproblems, including that of generating all-pairs shortest path information in a directed outerplanar graph. Our work is based on the fundamental hammock-decomposition technique of G. Frederickson. We achieve this decomposition in O(log n log* n) parallel time using O(n) processors. The hammock decomposition seems to be a fundamental operation that may help in improving the efficiency of many parallel (and sequential) graph algorithms. This work was partially supported by the EEC ESPRIT Basic Research Action No. 3075 (ALCOM) and by the Ministry of Industry, Energy and Technology of Greece.

14.
We give two optimal parallel algorithms for constructing the arrangement of n lines in the plane. The first method is quite simple and runs in O(log^2 n) time using O(n^2) work, and the second method, which is more sophisticated, runs in O(log n) time using O(n^2) work. This second result solves a well-known open problem in parallel computational geometry, and involves the use of a new algorithmic technique, the construction of an ε-pseudocutting. Our results immediately imply that the arrangement of n hyperplanes in d-dimensional space can be optimally constructed in O(log n) time using O(n^d) work, for fixed d. Our algorithms are for the CREW PRAM. This research was supported by the National Science Foundation under Grants CCR-8810568 and CCR-9003299, and by the NSF and DARPA under Grant CCR-8908092.

15.
1. Introduction. Let G = (V, E, W) be a connected, weighted and undirected graph, ∀e ∈ E, w(e) ...

16.
The theoretical presentation and analysis is given for two families of simple in-place merging algorithms and their limiting cases. The first family merges stably in O(k·n) time and O(n^(1/k)) additional space, with a limiting case running in O(n log n) time and constant space. The second family merges unstably in O(k·n) time and O(log^(k) n) space, with a limiting case running in O(n G(n)) time and constant space. Here G(n) is the least k such that F(k) ≥ n, where F(0) = 1 and F(i) = 2^F(i−1) for i ≥ 1. Each algorithm gives rise to a corresponding merge sort.
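The bound above involves the extremely slowly growing function G(n); for concreteness, here is a tiny sketch of its definition (the numeric test values are made up):

```python
def G(n):
    """The least k such that F(k) >= n, where F(0) = 1 and
    F(i) = 2**F(i-1).  It grows so slowly that it is at most 5
    for any input size that fits in memory."""
    k, f = 0, 1
    while f < n:
        f = 2 ** f
        k += 1
    return k

print([G(n) for n in (1, 2, 4, 16, 65536, 10**100)])   # [0, 1, 2, 3, 4, 5]
```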

17.
The range-searching problems that allow efficient partition trees are characterized as those defined by range spaces of finite Vapnik-Chervonenkis dimension. More generally, these problems are shown to be the only ones that admit linear-size solutions with sublinear query time in the arithmetic model. The proof rests on a characterization of spanning trees with a low stabbing number. We use probabilistic arguments to treat the general case, but we are able to use geometric techniques to handle the most common range-searching problems, such as simplex and spherical range search. We prove that any set of n points in E^d admits a spanning tree which cannot be cut by any hyperplane (or hypersphere) through more than roughly n^(1−1/d) edges. This result yields quasi-optimal solutions to simplex range searching in the arithmetic model of computation. We also look at polygon, disk, and tetrahedron range searching on a random access machine. Given n points in E^2, we derive a data structure of size O(n log n) for counting how many points fall inside a query convex k-gon (for arbitrary values of k). The query time is O(k√n log n). If k is fixed once and for all (as in triangular range searching), then the storage requirement drops to O(n). We also describe an O(n log n)-size data structure for counting how many points fall inside a query circle in O(√n log^2 n) query time. Finally, we present an O(n log n)-size data structure for counting how many points fall inside a query tetrahedron in 3-space in O(n^(2/3) log^2 n) query time. All the algorithms are optimal within polylogarithmic factors. In all cases, the preprocessing can be done in polynomial time. Furthermore, the algorithms can also handle reporting within the same complexity (adding the size of the output as a linear term to the query time). Portions of this work have appeared in preliminary form in "Partition trees for triangle counting and other range searching problems" (E. Welzl), Proc. 4th Ann. ACM Symp. Comput. Geom. (1988), 23–33, and "Tight Bounds on the Stabbing Number of Spanning Trees in Euclidean Space" (B. Chazelle), Comput. Sci. Techn. Rep. No. CS-TR-155-88, Princeton University, 1988. Bernard Chazelle acknowledges the National Science Foundation for supporting this research in part under Grant CCR-8700917. Emo Welzl acknowledges the Deutsche Forschungsgemeinschaft for supporting this research in part under Grant We 1265/1-1.

18.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time, respectively, and both with O(n) space, using p processors on the EREW PRAM. When p = O(n/log n), the cost is optimal in terms of the product of time and number of processors. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms leads to a parallel quickhull algorithm on the EREW PRAM. The work of this author was partially supported by a fellowship from the College of Science, Old Dominion University, Norfolk, VA 23529, USA.

19.
Algorithms are developed for determining if a set of polyhedral objects in R^3 can be intersected by a common transversal (stabbing) line. It can be determined in O(n log n) time if a set of n line segments in space has a line transversal, and such a transversal can be found in the same time bound. For a set of polyhedra with a total of n vertices, we give an O(n^4 log n) algorithm for determining the existence of, and computing, a line transversal. Helly-type theorems for lines and segments are also given. In particular, it is shown that if every six of a set of lines in space are intersected by a common transversal, then the entire set has a common transversal.

20.
A point p_i = (x_i, y_i) in the x–y plane is maximal if there is no point p_j = (x_j, y_j) such that x_j > x_i and y_j > y_i. We present a simple data structure, a dynamic contour search tree, which contains all the points in the plane and maintains an embedded linked list of maximal points so that m maximal points are accessible in O(m) time. Our data structure dynamically maintains the set of points so that insertions take O(log n) time, a speedup of O(log n) over previous results, and deletions take O((log n)^2) time. The research of the first author was partially supported by the National Science Foundation under Grant No. DCR-8320214 and by the Office of Naval Research under Contract No. N 00014-86-K-0689. The research of the second author was partially supported by the Office of Naval Research under Contract No. N 00014-86-K-0689.
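For intuition about the maximal points being maintained, the following static sketch (not from the paper) computes them for a fixed point set with a single sweep after sorting, in O(n log n) time; distinct x-coordinates are assumed for simplicity.

```python
def maximal_points(points):
    """Static counterpart of the maxima maintained by the contour search
    tree above: sort by x descending and sweep, keeping each point whose
    y exceeds every y seen so far.  Assumes distinct x-coordinates."""
    best_y = float("-inf")
    maxima = []
    for x, y in sorted(points, key=lambda p: -p[0]):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima            # reported in order of decreasing x

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]
print(maximal_points(pts))   # [(4, 1), (3, 4), (2, 6)]
```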
