Similar Documents
20 similar documents found (search time: 484 ms)
1.
In this paper we develop some new data structures for storing a set of disks that can answer different types of intersection queries efficiently. If the disks are non-intersecting we obtain a linear-size data structure that can report all k disks intersecting a query line segment in time O(n^(α+ε) + k), where n is the number of disks, α = log_2(1+√5) − 1 ≈ 0.695, and ε is an arbitrarily small positive constant. If the segment is a full line, the query time becomes O(n^α + k). For intersecting disks we obtain an O(n log n)-size data structure that can answer an intersection query in time O(n^(2/3) log^2 n + k). We also present a linear-size data structure for ray shooting queries, whose query time is O(n^(α+ε)). The research of the first two authors was supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The work of the third author was supported by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center (NSF-STC88-09648).

2.
In this paper we study dynamic variants of conjugation trees and related structures that have recently been introduced for performing various types of queries on sets of points and line segments, such as half-planar range searching, shooting, and intersection queries. For most of these types of queries, dynamic structures are obtained with an amortized update time of O(log^2 n) (or less) with only minor increases in query times. As an application of the method we obtain an output-sensitive method for hidden surface removal in a set of n triangles that runs in time O(n log n + n·k^α), where α = log_2((1+√5)/2) ≈ 0.695 and k is the size of the visibility map obtained. Research of the second author was partially supported by the ESPRIT II Basic Research Actions Program of the EC, under contract No. 3075 (project ALCOM).

3.
We present a parallel algorithm for finding the convex hull of a sorted set of points in the plane. Our algorithm runs in O(log n/log log n) time using O(n log log n/log n) processors in the Common CRCW PRAM computational model, which is shown to be time- and cost-optimal. The algorithm is based on n^(1/3) divide-and-conquer and uses a simple pointer-based data structure. Part of this work was done when the last three authors were at the Department of Computer and Information Science, Linköping University. The research of the second author was supported by the Academy of Finland.
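For orientation, the sequential core of this task is simple: once the points are sorted by x, a monotone-chain scan already builds the hull in O(n) time, and the paper's contribution is doing that work-optimally in parallel. The sketch below is our own plain-Python baseline, not the paper's PRAM algorithm.

```python
def hull_of_sorted(points):
    """Monotone-chain convex hull of points pre-sorted by x (ties by y).

    Sequential O(n) baseline for the problem the paper solves on a
    Common CRCW PRAM via n^(1/3)-way divide-and-conquer.
    """
    if len(points) <= 2:
        return list(points)

    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(pts):  # pop while the chain fails to turn left
        chain = []
        for p in pts:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower = half_hull(points)
    upper = half_hull(points[::-1])
    return lower[:-1] + upper[:-1]  # counter-clockwise hull

print(hull_of_sorted([(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]))
# -> [(0, 0), (4, 0), (3, 3), (1, 2)]
```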

4.
In this paper, we describe an algorithm to stably sort an array of n elements using only a linear number of data movements and constant extra space, albeit in quadratic time. It was not known previously whether such an algorithm existed. When the input contains only a constant number of distinct values, we present a sequence of in situ stable sorting algorithms making O(n lg^(k+1) n + kn) comparisons (lg^(k) means lg iterated k times, and lg* the number of times the logarithm must be taken to give a result ≤ 0) and O(kn) data movements for any fixed value k, culminating in one that makes O(n lg* n) comparisons and data movements. Stable versions of quicksort follow from these algorithms. Research supported by Natural Sciences and Engineering Research Council of Canada grant No. A-8237 and the Information Technology Research Centre of Ontario. Supported in part by a Research Initiation Grant from the Virginia Engineering Foundation.
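To make these bounds concrete, here is a small illustration of the iterated logarithm lg^(k) and of lg* (helper names are ours). We use the common convention of counting applications until the value drops to 1 or below; the abstract's cutoff of 0 shifts the count by at most one.

```python
import math

def lg_iter(x, k):
    """lg^(k): the binary logarithm applied k times."""
    for _ in range(k):
        x = math.log2(x)
    return x

def lg_star(x):
    """lg*: how many times lg must be applied before the value is <= 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

print(lg_iter(2 ** 16, 2))   # -> 4.0
print(lg_star(16))           # 16 -> 4 -> 2 -> 1, so 3
print(lg_star(2 ** 65536))   # -> 5: lg* grows extraordinarily slowly
```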

5.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time, respectively, both with O(n) space, using p processors on the EREW PRAM. When p = O(n/log n), the cost is optimal in terms of the product of time and number of processors. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms leads to a parallel quickhull algorithm on the EREW PRAM. The work of this author was partially supported by a fellowship from the College of Science, Old Dominion University, Norfolk, VA 23529, USA.

6.
Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue). Their data structure, the Fibonacci heap (or F-heap), supports arbitrary deletion in O(log n) amortized time and other heap operations in O(1) amortized time. In this paper we use F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs. For an undirected graph containing n vertices and m edges, our minimum spanning tree algorithm runs in O(m log β(m,n)) time, improved from O(m β(m,n)) time, where β(m,n) = min{i | log^(i) n ≤ m/n}. Our minimum spanning tree algorithm for directed graphs runs in O(n log n + m) time, improved from O(n log n + m log log log_(m/n+2) n). Both algorithms can be extended to allow a degree constraint at one vertex. Research supported in part by National Science Foundation Grant MCS-8302648. Research supported in part by National Science Foundation Grant MCS-8303139. Research supported in part by National Science Foundation Grant MCS-8300984 and a United States Army Research Office Program Fellowship, DAAG29-83-GO020.
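What the F-heap buys is an O(1) amortized decrease-key inside otherwise standard greedy MST computations. As a hedged illustration, here is Prim's algorithm using Python's heapq, a binary heap with lazy deletion standing in for the F-heap: it produces the same tree but only achieves the older O(m log n) bound.

```python
import heapq

def prim_mst(n, adj):
    """Prim's minimum spanning tree for a connected undirected graph.

    adj[v] is a list of (neighbor, weight) pairs. With heapq (a binary
    heap) this runs in O(m log n); a Fibonacci heap's O(1) amortized
    decrease-key is what the paper's sharper bounds rely on.
    """
    seen = [False] * n
    edges, total = [], 0
    pq = [(0, 0, -1)]  # (edge weight, vertex, parent)
    while pq:
        w, v, parent = heapq.heappop(pq)
        if seen[v]:
            continue  # stale entry: lazy deletion instead of decrease-key
        seen[v] = True
        total += w
        if parent >= 0:
            edges.append((parent, v, w))
        for u, wu in adj[v]:
            if not seen[u]:
                heapq.heappush(pq, (wu, u, v))
    return total, edges

adj = [[(1, 1), (3, 4)], [(0, 1), (2, 2), (3, 1)],
       [(1, 2), (3, 3)], [(0, 4), (1, 1), (2, 3)]]
print(prim_mst(4, adj))  # -> (4, [(0, 1, 1), (1, 3, 1), (1, 2, 2)])
```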

7.
In this paper, we characterize a class of graphs which can be embedded on a boolean cube. Some of the graphs in this class are identified with well-known graphs such as the multi-dimensional mesh of trees, the tree of meshes, etc. We suggest (i) an embedding of an r-dimensional mesh of trees of n^r(r+1) − rn^(r−1) nodes on a boolean cube of (2n)^r nodes, and (ii) an embedding of a tree of meshes with 2n^2 log n + n^2 nodes on a boolean cube with n^2·2^⌈log(2 log n+1)⌉ nodes.

8.
A new algorithm for rearranging a heap is presented and analysed in the average case. The average case upper bound for deleting the maximum element of a random heap is improved, and is shown to be less than ⌊log n⌋ + 0.299 + M(n) comparisons, where M(n) is between 0 and 1. It is also shown that a heap can be constructed using 1.650n + O(log n) comparisons with this algorithm, the best result for any algorithm which does not use any extra space. The expected time to sort n elements is argued to be less than n log n + 0.670n + O(log n), while simulation results point at an average case of n log n + 0.4n, which would make it the fastest in-place sorting algorithm. The same technique is used to show that the average number of comparisons when deleting the maximum element of a heap using Williams' algorithm for rearrangement is 2(⌊log n⌋ − 1.299 + L(n)), where L(n) is also between 0 and 1, and the average cost for Floyd-Williams Heapsort is at least 2n log n − 3.27n, counting only comparisons. An analysis of the number of interchanges when deleting the maximum element of a random heap, which is the same for both algorithms, is also presented.
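To show the kind of rearrangement being analysed, here is a sketch of the bottom-up deletion idea (our reconstruction, not necessarily the paper's exact algorithm): let the hole at the root fall to a leaf along the larger children, drop the last array element in, and bubble it back up. On a random heap the bubble-up is short, which is where an average near log n comparisons, rather than the roughly 2 log n of Williams' method, comes from.

```python
def delete_max(heap):
    """Remove and return the maximum of an array max-heap, bottom-up style.

    One comparison per level walks the hole to a leaf; the final element
    is then placed there and bubbled up, usually only a step or two.
    """
    top, last = heap[0], heap.pop()
    n = len(heap)
    if n == 0:
        return top
    hole = 0
    # walk the hole down, always toward the larger child
    while 2 * hole + 1 < n:
        child = 2 * hole + 1
        if child + 1 < n and heap[child + 1] > heap[child]:
            child += 1
        heap[hole] = heap[child]
        hole = child
    # drop the last element into the hole and bubble it back up
    heap[hole] = last
    while hole > 0 and heap[(hole - 1) // 2] < heap[hole]:
        parent = (hole - 1) // 2
        heap[hole], heap[parent] = heap[parent], heap[hole]
        hole = parent
    return top

h = [9, 5, 7, 1, 2, 6, 3]    # a valid max-heap
print(delete_max(h), h)      # -> 9 [7, 5, 6, 1, 2, 3]
```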

9.
A well-known simple heuristic algorithm for solving the all-nearest-neighbors problem in the k-dimensional Euclidean space E^k, k > 1, projects the given point set S onto the x-axis. For each point q ∈ S a nearest neighbor in S under any L_p-metric (1 ≤ p ≤ ∞) is found by sweeping from q into two opposite directions along the x-axis. If δ_q denotes the distance between q and its nearest neighbor in S, the sweep process stops after all points in a vertical 2δ_q-slice centered around q have been examined. We show that this algorithm solves the all-nearest-neighbors problem for n independent and uniformly distributed points in the unit cube [0,1]^k in Θ(n^(2−1/k)) expected time, while its worst-case performance is Θ(n^2).
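The heuristic is short enough to state as code. Below is a minimal sketch for k = 2 under the L_2 metric (function and variable names are ours):

```python
import math

def all_nearest_neighbors(pts):
    """Projection-sweep all-nearest-neighbors heuristic (k = 2, L2 metric).

    Sort by x; for each point q sweep left and right, and stop a sweep
    direction as soon as the x-gap alone exceeds the best distance found,
    i.e. the sweep leaves the vertical slice around q. Expected time
    Theta(n^(2-1/k)) on uniform inputs, Theta(n^2) in the worst case.
    """
    pts = sorted(pts)  # sort by x-coordinate (ties by y)
    n = len(pts)
    result = []
    for i, q in enumerate(pts):
        best_d, best = float("inf"), None
        for direction in (-1, 1):
            j = i + direction
            while 0 <= j < n and abs(pts[j][0] - q[0]) < best_d:
                d = math.dist(q, pts[j])
                if d < best_d:
                    best_d, best = d, pts[j]
                j += direction
        result.append((q, best, best_d))
    return result

for q, nn, d in all_nearest_neighbors([(0.1, 0.2), (0.3, 0.9),
                                       (0.35, 0.1), (0.8, 0.5)]):
    print(q, "->", nn, round(d, 3))
```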

10.
We design two variants of partition trees, called segment partition trees and interval partition trees, that can be used for storing arbitrarily oriented line segments in the plane in an efficient way. The raw structures use O(n log n) and O(n) storage, respectively, and their construction time is O(n log n). In our applications we augment these structures with certain (simple) auxiliary structures, which may increase the storage and preprocessing time by a polylogarithmic factor. It is shown how to use these structures for solving line segment intersection queries, triangle stabbing queries, and ray shooting queries in reasonably efficient ways. If we use the conjugation tree as the underlying partition tree, the query time for all problems is O(n^α), where α = log_2(1+√5) − 1 ≈ 0.695. The techniques are fairly simple and easy to understand. Research of the first author was partially supported by the ESPRIT II Basic Research Action of the EC under contract No. 3075 (Project ALCOM). Work by the third author has been supported in part by Office of Naval Research Grant N00014-87-K-0129, by National Science Foundation Grants DCR-83-20085 and CCR-89-01484, and by grants from the Digital Equipment Corporation, the IBM Corporation, the U.S.-Israeli Binational Science Foundation, the NCRD (the Israeli National Council for Research and Development), and the Fund for Basic Research in Electronics, Computers and Communication, administered by the Israeli Academy of Sciences.

11.
One of the fundamental algorithmic problems in computer science involves selecting the kth smallest element in a set S of n elements. In this paper we present a fast selection algorithm which runs in O(n^(1/8) log n) time on a mesh with multiple broadcasting of size n^(3/8) × n^(5/8). Our result shows that, just like semigroup computations, selection can be done much faster on a suitably chosen rectangular mesh than on square meshes. We also show that if every processor can store n^(1/9) items, then our selection algorithm runs in O(n^(1/9) log n) time on a mesh with multiple broadcasting of size n^(1/3) × n^(5/9). Work supported by NASA under grant NCC1-99. This author was partly supported by NSF grant CCR-8009996.

12.
Computing a maximum independent set, weighted or unweighted, is NP-hard for general as well as planar graphs. However, polynomial-time algorithms do exist for solving this problem on special classes of graphs. In this paper we present an efficient algorithm for computing a maximum weight independent set in trees. A divide-and-conquer approach based on centroid decomposition of trees is used to compute a maximum weight independent set within O(n log n) time, where n is the number of vertices in the tree. We introduce a notion of an alternating tree which is crucial in obtaining a new independent set from the previous one.
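For comparison only: on trees the problem itself also yields to a textbook O(n) dynamic program with a take/skip value per vertex. The sketch below shows that baseline; it is not the paper's centroid-decomposition method, whose design instead supports the alternating-tree updates described above.

```python
def max_weight_independent_set(weight, adj, root=0):
    """Textbook O(n) tree DP for maximum weight independent set.

    take[v]: best weight in v's subtree when v is included.
    skip[v]: best weight in v's subtree when v is excluded.
    """
    n = len(weight)
    take, skip = [0] * n, [0] * n
    stack = [(root, -1, False)]      # iterative post-order traversal
    while stack:
        v, parent, processed = stack.pop()
        if not processed:
            stack.append((v, parent, True))
            for u in adj[v]:
                if u != parent:
                    stack.append((u, v, False))
        else:
            take[v] = weight[v]
            for u in adj[v]:
                if u != parent:
                    take[v] += skip[u]               # children must be skipped
                    skip[v] += max(take[u], skip[u])
    return max(take[root], skip[root])

# Path 0-1-2-3 with weights 4, 5, 3, 6: the optimum is {1, 3} with weight 11.
print(max_weight_independent_set([4, 5, 3, 6], [[1], [0, 2], [1, 3], [2]]))
```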

13.
A VLSI sorter of size O(n) can sort n elements in linear time when the input and output time are taken into account. If the input contains more than n elements, some preprocessing has to be performed. A VLSI partition algorithm that provides a solution to this problem is presented. The algorithm partitions the input data into two smaller parts as the quicksort algorithm does; that is, the elements of the first part will be smaller than the elements of the second part. The partition is repeated until the parts are small enough to fit in the sorter. It is shown that the average number of times each element must go through the partitioner is O(log k) for a data file of size kn, where n is the size of the sorter. In the worst case, where the partitioner fails to divide the input evenly, the elements must go O(k) times through the partitioner, as in the quicksort algorithm. The partitioner can also be used, with simple modifications, as a sorter, a stack, a queue, or as a priority queue. Other advantages of the VLSI algorithm are also discussed.
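In software terms the scheme is quicksort recursion with the base case sized to the hardware. A toy emulation follows (the sorter capacity and all names are ours; Python's sorted stands in for the linear-time VLSI sorter):

```python
import random

SORTER_SIZE = 8  # capacity n of the hardware sorter (toy value)

def vlsi_sort(data):
    """Partition-then-sort emulation of the VLSI scheme.

    Quicksort-style partitioning splits the file until a part fits in
    the size-n sorter, so each element passes through the partitioner
    O(log k) times on average for a file of size k*n (O(k) times if the
    splits are persistently uneven); the sorter then finishes each part.
    """
    if len(data) <= SORTER_SIZE:
        return sorted(data)  # stands in for the O(n)-time hardware sorter
    pivot = random.choice(data)
    small = [x for x in data if x < pivot]
    equal = [x for x in data if x == pivot]
    large = [x for x in data if x > pivot]
    return vlsi_sort(small) + equal + vlsi_sort(large)

print(vlsi_sort([random.randrange(100) for _ in range(40)]))
```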

14.
The motivating problem for this paper is to find the expected covering time of a random walk on a balanced binary tree with n vertices. Previous upper bounds for general graphs of O(|V| |E|)(1) and O(|V| |E|/d_min)(2) imply an upper bound of O(n^2). We show an upper bound on general graphs of O(|E| log |V|), which implies an upper bound of O(n log^2 n). The previous lower bound was Ω(|V| log |V|) for trees.(2) In our main result, we show a lower bound of Ω(|V| (log_(d_max) |V|)^2) for trees, which yields a lower bound of Ω(n log^2 n). We also extend our techniques to show an upper bound for general graphs of O(max{E T_i} log |V|).
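The binary-tree bounds are easy to probe empirically. Below is a small simulation of our own (not from the paper) that averages the cover time of a walk on a complete binary tree and prints it next to the n log^2 n yardstick:

```python
import math
import random

def cover_time_binary_tree(depth, trials=20):
    """Average empirical cover time of a random walk on a complete binary
    tree with nodes 1..2^(depth+1)-1 in heap order, started at the root."""
    n = 2 ** (depth + 1) - 1

    def neighbors(v):
        nbrs = [v // 2] if v > 1 else []
        if 2 * v <= n:
            nbrs += [2 * v, 2 * v + 1]
        return nbrs

    total = 0
    for _ in range(trials):
        seen, v, steps = {1}, 1, 0
        while len(seen) < n:
            v = random.choice(neighbors(v))
            seen.add(v)
            steps += 1
        total += steps
    return total / trials

for depth in (4, 6, 8):
    n = 2 ** (depth + 1) - 1
    print(depth, cover_time_binary_tree(depth), round(n * math.log2(n) ** 2))
```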

15.
The range-searching problems that allow efficient partition trees are characterized as those defined by range spaces of finite Vapnik-Chervonenkis dimension. More generally, these problems are shown to be the only ones that admit linear-size solutions with sublinear query time in the arithmetic model. The proof rests on a characterization of spanning trees with a low stabbing number. We use probabilistic arguments to treat the general case, but we are able to use geometric techniques to handle the most common range-searching problems, such as simplex and spherical range search. We prove that any set of n points in E^d admits a spanning tree which cannot be cut by any hyperplane (or hypersphere) through more than roughly n^(1−1/d) edges. This result yields quasi-optimal solutions to simplex range searching in the arithmetic model of computation. We also look at polygon, disk, and tetrahedron range searching on a random access machine. Given n points in E^2, we derive a data structure of size O(n log n) for counting how many points fall inside a query convex k-gon (for arbitrary values of k). The query time is O(√(kn) log n). If k is fixed once and for all (as in triangular range searching), then the storage requirement drops to O(n). We also describe an O(n log n)-size data structure for counting how many points fall inside a query circle in O(√n log^2 n) query time. Finally, we present an O(n log n)-size data structure for counting how many points fall inside a query tetrahedron in 3-space in O(n^(2/3) log^2 n) query time. All the algorithms are optimal within polylogarithmic factors. In all cases, the preprocessing can be done in polynomial time. Furthermore, the algorithms can also handle reporting within the same complexity (adding the size of the output as a linear term to the query time). Portions of this work have appeared in preliminary form in "Partition trees for triangle counting and other range searching problems" (E. Welzl), Proc. 4th Ann. ACM Symp. Comput. Geom. (1988), 23-33, and "Tight Bounds on the Stabbing Number of Spanning Trees in Euclidean Space" (B. Chazelle), Comput. Sci. Techn. Rep. No. CS-TR-155-88, Princeton University, 1988. Bernard Chazelle acknowledges the National Science Foundation for supporting this research in part under Grant CCR-8700917. Emo Welzl acknowledges the Deutsche Forschungsgemeinschaft for supporting this research in part under Grant We 1265/1-1.

16.
In physical VLSI design, network design (wiring) is the most time-consuming phase. For solving global wiring problems, we propose to first compute from the layout geometry a graph that preserves all shortest paths between pairs of relevant points, and then to operate on that graph for computing shortest paths, Steiner minimal tree approximations, or the like. For a set of points and a set of simple orthogonal polygons as obstacles in the plane, with n input points (polygon corners or others) altogether, we show how a shortest-paths-preserving graph of size O(n log n) can be computed in time O(n log n) in the worst case, with space O(n). We illustrate the merits of this approach with a simple example: if the length of a longest edge in the graph is bounded by a polynomial in n, an assumption that is clearly fulfilled for graphs derived from VLSI layout geometries, then a shortest path can be computed in time O(n log n log log n) in the worst case; this result improves on the known best one of O(n (log n)^(3/2)).

17.
This paper presents fast parallel algorithms for the following graph-theoretic problems: breadth-depth search of directed acyclic graphs; minimum-depth search of graphs; finding the minimum-weighted paths between all node-pairs of a weighted graph and the critical activities of an activity-on-edge network. The first algorithm has O(log d log n) time complexity with O(n^3) processors, and the remaining algorithms achieve an O(log d log log n) time bound with O(n^2 ⌈n/log log n⌉) processors, where d is the diameter of the graph or of the directed acyclic graph (which also represents an activity-on-edge network) with n nodes. These algorithms work on an unbounded shared-memory model of the single instruction stream, multiple data stream computer that allows both read and write conflicts.

18.
We give a generalization of the hypergreedy algorithm for minimum-weight perfect matching on a complete edge-weighted graph whose weights satisfy the triangle inequality. With a modified version of this algorithm we obtain a log n-approximate perfect matching heuristic for points in the Euclidean plane, running in O(n log^2 n) time. This research was supported in part by the DIMACS Grant No. NSF-STC88-09648. This research was supported in part by the NSF under Grant No. CCR 88-07518.
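The hypergreedy algorithm itself is involved; purely for orientation, here is the naive shortest-edge-first greedy matching in O(n^2 log n) time (our baseline sketch; unlike the heuristic above it carries no log n guarantee):

```python
import math
from itertools import combinations

def greedy_matching(pts):
    """Naive greedy perfect matching: repeatedly match the closest pair of
    still-unmatched points. Simple, but without the log n approximation
    guarantee that the hypergreedy heuristic achieves."""
    assert len(pts) % 2 == 0
    pairs = sorted(combinations(range(len(pts)), 2),
                   key=lambda e: math.dist(pts[e[0]], pts[e[1]]))
    matched, matching = set(), []
    for i, j in pairs:
        if i not in matched and j not in matched:
            matched.update((i, j))
            matching.append((pts[i], pts[j]))
    return matching

print(greedy_matching([(0, 0), (1, 0), (5, 1), (5, 2)]))
# -> [((0, 0), (1, 0)), ((5, 1), (5, 2))]
```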

19.
Nested dissection is an algorithm invented by Alan George for preserving sparsity in Gaussian elimination on symmetric positive definite matrices. Nested dissection can be viewed as a recursive divide-and-conquer algorithm on an undirected graph; it uses separators in the graph, which are small sets of vertices whose removal divides the graph approximately in half. George and Liu gave an implementation of nested dissection that used a heuristic to find separators. Lipton and Tarjan gave an algorithm to find n^(1/2)-separators in planar graphs and two-dimensional finite element graphs, and Lipton, Rose, and Tarjan used these separators in a modified version of nested dissection, guaranteeing bounds of O(n log n) on fill and O(n^(3/2)) on operation count. We analyze the combination of the original George-Liu nested dissection algorithm and the Lipton-Tarjan planar separator algorithm. This combination is interesting because it is easier to implement than the Lipton-Rose-Tarjan version, especially in the framework of existing sparse matrix software. Using some topological graph theory, we prove O(n log n) fill and O(n^(3/2)) operation count bounds for planar graphs, two-dimensional finite element graphs, graphs of bounded genus, and graphs of bounded degree with n^(1/2)-separators. For planar and finite element graphs, the leading constant factor is smaller than that in the Lipton-Rose-Tarjan analysis. We also construct a class of graphs with n^(1/2)-separators for which our algorithm does not achieve an O(n log n) bound on fill. The work of this author was supported in part by the Hertz Foundation under a graduate fellowship and by the National Science Foundation under Grant MCS 82-02948. The work of this author was supported in part by the National Science Foundation under Grant MCS 78-26858 and by the Office of Naval Research under Contract N00014-76-C-0688.
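On model problems the ordering itself is easy to write down. The sketch below (our own illustration) produces a nested dissection elimination order for a rows × cols grid graph: each recursive call numbers the two halves first and the middle-row or middle-column separator last.

```python
def nested_dissection_order(rows, cols):
    """Nested dissection ordering of the rows x cols grid graph.

    Eliminating variables in the returned order keeps fill low because
    every separator is numbered after both halves it separates.
    """
    order = []

    def dissect(r0, r1, c0, c1):  # half-open cell range [r0,r1) x [c0,c1)
        if r1 - r0 <= 0 or c1 - c0 <= 0:
            return
        if (r1 - r0) * (c1 - c0) == 1:
            order.append((r0, c0))
            return
        if r1 - r0 >= c1 - c0:        # separator: a middle row
            mid = (r0 + r1) // 2
            dissect(r0, mid, c0, c1)
            dissect(mid + 1, r1, c0, c1)
            order.extend((mid, c) for c in range(c0, c1))
        else:                         # separator: a middle column
            mid = (c0 + c1) // 2
            dissect(r0, r1, c0, mid)
            dissect(r0, r1, mid + 1, c1)
            order.extend((r, mid) for r in range(r0, r1))

    dissect(0, rows, 0, cols)
    return order

order = nested_dissection_order(4, 4)
print(len(order), order[-4:])  # 16 cells; the last four form the middle row
```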

20.
Efficient parallel algorithms are presented, on the CREW PRAM model, for generating a succinct encoding of the all-pairs shortest path information in a directed planar graph G with real-valued edge costs but no negative cycles. We assume that a planar embedding of G is given, together with a set of q faces that cover all the vertices. Our algorithm then runs in O(log^2 n) time and employs O(nq + M(q)) processors, where M(t) is the number of processors required to multiply two t×t matrices in O(log t) time. Let us note here that whenever q < n, our processor bound is better than the best previous one of O(M(n)). O(log^2 n)-time, n-processor algorithms are presented for various subproblems, including that of generating all-pairs shortest path information in a directed outerplanar graph. Our work is based on the fundamental hammock-decomposition technique of G. Frederickson. We achieve this decomposition in O(log n log* n) parallel time using O(n) processors. The hammock decomposition seems to be a fundamental operation that may help in improving the efficiency of many parallel (and sequential) graph algorithms. This work was partially supported by the EEC ESPRIT Basic Research Action No. 3075 (ALCOM) and by the Ministry of Industry, Energy and Technology of Greece.
