Similar Documents
20 similar documents retrieved.
1.
In this paper, sequential and parallel algorithms are presented for finding a maximum-weight independent set in a weighted permutation graph. The sequential algorithm, based on dynamic programming, runs in O(n log n) time and requires O(n) space. The parallel algorithm runs in O(log² n) time using O(n³/log n) processors on the CREW PRAM, or O(log n) time using O(n³) processors on the CRCW PRAM.
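Independent sets in a permutation graph correspond to increasing subsequences of the defining permutation, so a maximum-weight independent set can be read off from a maximum-weight increasing subsequence. The sketch below uses that standard correspondence with a Fenwick tree over prefix maxima to reach O(n log n); it is an illustrative sketch under that assumption, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): maximum-weight independent
# set in a permutation graph via the standard correspondence with a
# maximum-weight increasing subsequence of the defining permutation.
# O(n log n) using a Fenwick tree that maintains prefix maxima.

def max_weight_independent_set(perm, weight):
    """perm: a permutation of 0..n-1; weight[i]: weight of vertex i."""
    n = len(perm)
    tree = [0.0] * (n + 1)              # Fenwick tree over values, prefix maxima

    def update(pos, value):             # raise position pos (1-based) to value
        while pos <= n:
            tree[pos] = max(tree[pos], value)
            pos += pos & (-pos)

    def prefix_max(pos):                # maximum over positions 1..pos
        best = 0.0
        while pos > 0:
            best = max(best, tree[pos])
            pos -= pos & (-pos)
        return best

    best_total = 0.0
    for i in range(n):
        # Best total weight of an increasing subsequence ending in value perm[i].
        cur = prefix_max(perm[i]) + weight[i]
        update(perm[i] + 1, cur)
        best_total = max(best_total, cur)
    return best_total

# Example: max_weight_independent_set([2, 0, 3, 1], [5, 1, 4, 2]) returns 9.0
# (vertices 0 and 2 form an independent set of weight 5 + 4).
```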

2.
A bound of O(N^(1+1/k)) on the running time of Shellsort with O(log N) passes is proved very simply by applying a Frobenius basis with k elements.
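For context, each Shellsort pass with gap h insertion-sorts the h interleaved subsequences, and the bound above concerns gap sequences of length O(log N). The sketch below shows that pass structure with a plain geometric (halving) gap sequence, chosen purely for illustration; it is not the Frobenius-basis sequence analysed in the paper.

```python
# Shellsort with a simple halving gap sequence, giving O(log n) passes.
# Shown only to illustrate the pass structure analysed above; the paper's
# Frobenius-basis gap sequence is not reproduced here.

def shellsort(a):
    n = len(a)
    gap = n // 2
    while gap > 0:                          # O(log n) passes
        for i in range(gap, n):             # insertion sort of each gap-subsequence
            x = a[i]
            j = i
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]
                j -= gap
            a[j] = x
        gap //= 2
    return a
```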

3.
In this paper, parallel algorithms are presented for solving several problems on permutation graphs. The coloring problem is solved in O(log² n) time using O(n³/log n) processors on the CREW PRAM, or O(log n) time using O(n³) processors on the CRCW PRAM. The weighted clique problem, the weighted independent set problem, the clique cover problem, and the maximal layers problem are all solved within the same complexity bounds. We also show that the longest common subsequence problem belongs to the class NC.

4.
Split trees are suitable data structures for storing records with different access frequencies. Under the assumption that the access frequencies are all distinct, Huang proposed an O(n⁴ log m)-time algorithm to construct an (m+1)-way split tree for a set of n keys. In this paper, we generalize Huang's algorithm to handle non-distinct access frequencies. The technique used in the generalized algorithm extends that of Hester et al., who considered the binary case. The generalized algorithm runs in O(n⁵ log m) time.

5.
We present a parallel algorithm for finding the convex hull of a sorted set of points in the plane. Our algorithm runs in O(log n / log log n) time using O(n log log n / log n) processors in the Common CRCW PRAM computational model, and is shown to be both time and cost optimal. The algorithm is based on n^(1/3) divide-and-conquer and uses a simple pointer-based data structure. Part of this work was done while the last three authors were at the Department of Computer and Information Science, Linköping University. The research of the second author was supported by the Academy of Finland.
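The sequential yardstick behind this bound is that points already sorted by x-coordinate admit a linear-time hull computation, for example by a monotone-chain scan, so a cost-optimal parallel algorithm must do only O(n) total work. The sketch below shows only that sequential baseline; it is not the Common CRCW PRAM algorithm of the paper and assumes distinct points sorted lexicographically.

```python
# Sequential monotone-chain convex hull for points pre-sorted by x (then y).
# Runs in O(n) time on sorted input; shown only as the sequential work bound
# the parallel algorithm above is measured against.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_of_sorted(points):
    if len(points) <= 2:
        return list(points)
    lower, upper = [], []
    for p in points:                            # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):                  # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]              # hull in counter-clockwise order
```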

6.
We introduce the notion of searchability as a property of an in-place merging algorithm. We show that a pair of sorted arrays can be merged in place in linear time such that a search can be performed in logarithmic time at any point during the merging process. We apply this method to devise an implicit data structure that supports searches in O(log² n) worst-case time and O(log n) average time, and insertions in O(log n) worst-case time. This research was partly supported by NSERC under grant A8237 and presented in preliminary form at the 10th International Colloquium on Automata, Languages and Programming. On leave from the University of Chile.

7.
A new algorithm for rearranging a heap is presented and analysed in the average case. The average-case upper bound for deleting the maximum element of a random heap is improved, and is shown to be less than ⌊log n⌋ + 0.299 + M(n) comparisons, where M(n) is between 0 and 1. It is also shown that a heap can be constructed using 1.650n + O(log n) comparisons with this algorithm, the best result for any algorithm that does not use extra space. The expected time to sort n elements is argued to be less than n log n + 0.670n + O(log n), while simulation results point to an average case of n log n + 0.4n, which would make it the fastest in-place sorting algorithm. The same technique is used to show that the average number of comparisons when deleting the maximum element of a heap using Williams' rearrangement algorithm is 2(⌊log n⌋ − 1.299 + L(n)), where L(n) is also between 0 and 1, and that the average cost of Floyd-Williams Heapsort is at least 2n log n − 3.27n, counting only comparisons. An analysis of the number of interchanges when deleting the maximum element of a random heap, which is the same for both algorithms, is also presented.
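The saving in the average case comes from how the vacancy left by the deleted maximum is refilled. A well-known strategy of this kind, sketched below, first moves the hole all the way down along the larger children and then bubbles the former last element up from the leaf; on average that element belongs near the bottom, so the upward phase is short. This is an assumed illustration of the idea, not necessarily the exact procedure analysed in the paper.

```python
# Delete-max on a list-based binary max-heap using the "sift the hole to a
# leaf, then bubble the last element up" strategy. An illustration of the
# kind of rearrangement discussed above, not necessarily the paper's exact
# algorithm.

def delete_max(heap):
    top = heap[0]
    last = heap.pop()
    n = len(heap)
    if n == 0:
        return top
    # Phase 1: move the hole from the root down to a leaf, always following
    # the larger child; no comparisons against `last` are made yet.
    hole, child = 0, 1
    while child < n:
        if child + 1 < n and heap[child + 1] > heap[child]:
            child += 1
        heap[hole] = heap[child]
        hole, child = child, 2 * child + 1
    # Phase 2: drop the former last element into the leaf hole and bubble it up.
    heap[hole] = last
    while hole > 0 and heap[(hole - 1) // 2] < heap[hole]:
        parent = (hole - 1) // 2
        heap[hole], heap[parent] = heap[parent], heap[hole]
        hole = parent
    return top
```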

8.
We present an algorithm to compute a Euclidean minimum spanning tree of a given set S of N points in E^d in time O(F_d(N, N) log^d N), where F_d(n, m) is the time required to compute a bichromatic closest pair among n red and m green points in E^d. If F_d(N, N) = Ω(N^(1+ε)) for some fixed ε > 0, then the running time improves to O(F_d(N, N)). Furthermore, we describe a randomized algorithm to compute a bichromatic closest pair in expected time O((nm log n log m)^(2/3) + m log² n + n log² m) in E³, which yields an O(N^(4/3) log^(4/3) N) expected-time algorithm for computing a Euclidean minimum spanning tree of N points in E³. In d ≥ 4 dimensions we obtain expected time O((nm)^(1−1/([d/2]+1)+ε) + m log n + n log m) for the bichromatic closest pair problem and O(N^(2−2/([d/2]+1)+ε)) for the Euclidean minimum spanning tree problem, for any positive ε. The first, second, and fourth authors acknowledge support from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), a National Science Foundation Science and Technology Center under NSF Grant STC 88-09648. The second author's work was supported by the National Science Foundation under Grant CCR-8714565. The third author's work was supported by the Deutsche Forschungsgemeinschaft under Grant A1 253/1-3, Schwerpunktprogramm "Datenstrukturen und effiziente Algorithmen". The last two authors' work was also partially supported by the ESPRIT II Basic Research Action of the EC under Contract No. 3075 (project ALCOM).

9.
An algorithm for finding a polygon with the minimum number of edges nested between two simple n-sided polygons is presented. The algorithm solves the problem in at most O(n log n) time and improves on the time complexity of two previous O(n²) algorithms. The work was supported by NSERC grant OPG0041629.

10.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time, respectively, both using O(n) space and p processors on the EREW PRAM. When p = O(n/log n), the cost, measured as the product of time and number of processors, is optimal. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms also leads to a parallel quickhull algorithm on the EREW PRAM. The work of this author was partially supported by a fellowship from the College of Science, Old Dominion University, Norfolk, VA 23529, USA.
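The connection to search trees mentioned above can already be seen in the sequential setting: the recursion tree of randomized quicksort, with the pivot at each node, is a binary search tree of expected depth O(log n). The sketch below shows only this sequential observation (assuming distinct keys); the paper's contribution is carrying it out on the EREW PRAM.

```python
# Sequential illustration of the quicksort / search-tree connection: the
# recursion tree of randomized quicksort, with the pivot at the root of each
# subproblem, is a binary search tree of expected depth O(log n). Distinct
# keys are assumed. This is not the EREW PRAM algorithm of the paper.

import random

def quicksort_bst(keys):
    """Return a nested (pivot, left_subtree, right_subtree) tuple, or None."""
    if not keys:
        return None
    pivot = random.choice(keys)
    left = [x for x in keys if x < pivot]
    right = [x for x in keys if x > pivot]
    return (pivot, quicksort_bst(left), quicksort_bst(right))
```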

11.
One of the fundamental algorithmic problems in computer science is selecting the kth smallest element of a set S of n elements. In this paper we present a fast selection algorithm that runs in O(n^(1/8) log n) time on a mesh with multiple broadcasting of size n^(3/8) × n^(5/8). Our result shows that, just as for semigroup computations, selection can be done much faster on a suitably chosen rectangular mesh than on square meshes. We also show that if every processor can store n^(1/9) items, then our selection algorithm runs in O(n^(1/9) log n) time on a mesh with multiple broadcasting of size n^(1/3) × n^(5/9). Work supported by NASA under grant NCC1-99. This author was partly supported by NSF grant CCR-8009996.

12.
We establish an O(n log² n) upper bound on the time for deterministic distributed broadcasting in multi-hop radio networks with unknown topology. This nearly matches the known lower bound of Ω(n log n). The fastest previously known algorithm for this problem works in time O(n^(3/2)). Using our broadcasting algorithm, we develop an O(n^(3/2) log² n) algorithm for gossiping in the same network model.

13.
An efficient and numerically correct program called FastHull for computing the convex hulls of finite point sets in the plane is presented. It is based on the Akl-Toussaint algorithm and the MergeHull algorithm. Numerical correctness of the FastHull procedure is ensured by using special routines for interval arithmetic and multiple-precision arithmetic. The FastHull algorithm guarantees O(N log N) running time in the worst case and has linear-time performance for many kinds of input patterns. It appears that the FastHull algorithm runs faster than any currently known 2D convex hull algorithm for many input point patterns.
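The Akl-Toussaint step referred to above is a pruning heuristic: points strictly inside the quadrilateral spanned by the extreme points in the x and y directions cannot lie on the hull and can be discarded before the main hull computation, which is what yields linear-time behaviour on many input patterns. A minimal sketch of that pruning step follows; it uses plain floating point and does not reproduce FastHull's interval or multiple-precision routines.

```python
# Akl-Toussaint pruning: discard every point strictly inside the quadrilateral
# formed by the points of minimum/maximum x and y. The survivors are then fed
# to an exact convex hull routine. Plain floats only; FastHull's interval and
# multiple-precision arithmetic is not reproduced here.

def akl_toussaint_filter(points):
    if len(points) < 5:
        return list(points)
    quad = [min(points, key=lambda p: p[0]),    # leftmost
            min(points, key=lambda p: p[1]),    # bottommost
            max(points, key=lambda p: p[0]),    # rightmost
            max(points, key=lambda p: p[1])]    # topmost (counter-clockwise order)

    def strictly_inside(p):
        for i in range(4):
            a, b = quad[i], quad[(i + 1) % 4]
            cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            if cross <= 0:                      # on or outside an edge: keep the point
                return False
        return True

    return [p for p in points if not strictly_inside(p)]
```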

14.
We give a generalization of the hypergreedy algorithm for minimum-weight perfect matching on a complete edge-weighted graph whose weights satisfy the triangle inequality. With a modified version of this algorithm we obtain a log n-approximate perfect matching heuristic for points in the Euclidean plane that runs in O(n log² n) time. This research was supported in part by the DIMACS Grant No. NSF-STC88-09648. This research was supported in part by the NSF under Grant No. CCR 88-07518.

15.
In this paper we consider the Steiner multicut problem. This is a generalization of the minimum multicut problem in which, instead of separating node pairs, the goal is to find a minimum-weight set of edges that separates all given sets of nodes. A set is considered separated if it is not contained in a single connected component. We give an O(log³(kt))-approximation algorithm for the Steiner multicut problem, where k is the number of sets and t is the maximum cardinality of a set. This improves the O(t log k) bound that easily follows from the previously known multicut results. We also consider an extension of multicuts to the directed case, namely the problem of finding a minimum-weight set of edges whose removal ensures that none of the strongly connected components includes one of the prespecified k node pairs. We describe an O(log² k)-approximation algorithm for this directed multicut problem. If k ≪ n, this represents an improvement over the O(log n log log n)-approximation algorithm that is implied by the technique of Seymour.

16.
Let a given collection of sets have size N, measured by the sum of the cardinalities. Yellin and Jutla presented an algorithm which constructs the partial order induced by the subset relation (a “subset graph”) in a claimed O(N²/log N) operations over a dictionary ADT, and exhibited a collection whose subset graph has Θ(N²/log² N) edges. This paper first establishes a matching upper bound on the number of edges in a subset graph. It also presents a finer analysis of the algorithm, which confirms the claimed upper bound and shows it to be tight. A simple implementation requiring O(1) bit-parallel operations per ADT operation is presented, along with a variant of the algorithm whose implementation requires O(N²/log N) RAM operations.
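The O(1) bit-parallel operations per ADT operation refer to packing sets into machine-word bit vectors, so that membership and subset tests become single word operations. As a small illustration of that encoding (the naive quadratic construction, not the Yellin-Jutla algorithm analysed above):

```python
# Bit-parallel subset testing: encode each set over a small universe as an
# integer bitmask, so "A is a subset of B" becomes (A & B) == A. This is the
# naive quadratic subset-graph construction, shown only to illustrate the
# bit-vector encoding, not the algorithm analysed above.

def subset_graph(sets):
    """sets: list of iterables of small non-negative integers.
    Returns edges (i, j) meaning sets[i] is a proper subset of sets[j]."""
    masks = [0] * len(sets)
    for i, s in enumerate(sets):
        for x in s:
            masks[i] |= 1 << x
    edges = []
    for i, a in enumerate(masks):
        for j, b in enumerate(masks):
            if i != j and a != b and (a & b) == a:   # proper subset test
                edges.append((i, j))
    return edges

# subset_graph([{1, 2}, {1, 2, 3}, {3}]) -> [(0, 1), (2, 1)]
```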

17.
We consider the following problem: given a rectangle containing n points, find the largest-perimeter subrectangle whose sides are parallel to those of the original rectangle, whose aspect ratio is below a given bound, and which does not contain any of the given points. Chazelle, Drysdale and Lee studied a variant of this problem with area as the quantity to be maximized, giving an O(n log³ n) algorithm for that problem. We adopt a similar divide-and-conquer approach and are able to exploit the simpler properties of the perimeter measure to obtain an O(n log² n) algorithm for our problem. The work of the first author was supported by the Academy of Finland and that of the second by the Natural Sciences and Engineering Research Council of Canada Grant No. A-5692.

18.
An O(n log n) divide-and-conquer algorithm for finding the relative neighborhood graph RNG(V) of a set V of n points in Euclidean space is presented. If implemented in parallel, its time complexity is O(n) and it requires O(log n) processors.
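For reference, RNG(V) joins p and q exactly when no third point is closer to both of them than they are to each other. A direct cubic-time construction from this definition is sketched below, purely to make the definition concrete; it is not the O(n log n) divide-and-conquer algorithm of the paper.

```python
# Definition-based construction of the relative neighborhood graph RNG(V):
# the edge (p, q) is present iff no point r satisfies
# max(d(p, r), d(q, r)) < d(p, q). Cubic time; shown only to make the
# definition concrete, not the paper's O(n log n) algorithm.

from math import dist

def relative_neighborhood_graph(points):
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d_ij = dist(points[i], points[j])
            blocked = any(
                k != i and k != j and
                max(dist(points[i], points[k]), dist(points[j], points[k])) < d_ij
                for k in range(n)
            )
            if not blocked:
                edges.append((i, j))
    return edges
```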

19.
The syntenic distance between two genomes is given by the minimum number of fusions, fissions, and translocations required to transform one into the other, ignoring the order of genes within chromosomes. Computing this distance is NP-hard. In the present work, we give a tight connection between syntenic distance and the incomplete gossip problem, a novel generalization of the classical gossip problem. In this problem, there are n gossipers, each with a unique piece of initial information; they communicate by phone calls in which the two participants exchange all their information. The goal is to minimize the total number of phone calls necessary to inform each gossiper of the set of relevant gossip he desires to learn. As an application of the connection between syntenic distance and incomplete gossip, we derive an O(2^(O(n log n))) algorithm to exactly compute the syntenic distance between two genomes with at most n chromosomes each. Our algorithm requires O(n² + 2^(O(d log d))) time when this distance is d, improving the O(n² + 2^(O(d²))) running time of the best previous exact algorithm.
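In the classical gossip problem every gossiper must end up knowing every piece of information, and for n ≥ 4 gossipers 2n − 4 calls suffice: funnel everything through four hub gossipers. The small simulation below checks that scheme; it only illustrates the classical problem that incomplete gossip generalizes, and is unrelated to the syntenic-distance algorithm itself.

```python
# Classical gossip with 2n - 4 calls for n >= 4: non-hubs first call one of
# four hubs, the hubs exchange in two rounds, then the hubs call everyone
# back. Shown only to illustrate the base problem that "incomplete gossip"
# generalizes.

def classical_gossip_calls(n):
    assert n >= 4
    calls = [(i, i % 4) for i in range(4, n)]        # gather: n - 4 calls
    calls += [(0, 1), (2, 3), (0, 2), (1, 3)]        # hubs exchange: 4 calls
    calls += [(i % 4, i) for i in range(4, n)]       # scatter: n - 4 calls
    return calls                                     # total: 2n - 4 calls

def everyone_knows_everything(n, calls):
    know = [{i} for i in range(n)]
    for a, b in calls:                               # a call merges both views
        know[a] |= know[b]
        know[b] |= know[a]
    return all(len(k) == n for k in know)

# everyone_knows_everything(8, classical_gossip_calls(8)) -> True
```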

20.
We present a new algorithm for the Hitchcock transportation problem. On instances with n sources and k sinks, our algorithm has a worst-case running time of O(nk²(log n + k log k)). It closes a gap between algorithms whose running time is linear in n but exponential in k and a polynomial-time algorithm with running time O(nk² log² n).
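For readers who want the problem pinned down: the Hitchcock transportation problem asks for a cheapest shipment plan from n sources with given supplies to k sinks with given demands. The sketch below just states it as a linear program and hands it to a generic LP solver; it assumes scipy is available and that total supply equals total demand, and it is not the specialized algorithm of the paper.

```python
# Hitchcock transportation problem as a linear program, solved with a generic
# LP solver. This only pins down the problem definition; it is not the
# specialized algorithm discussed above. Assumes total supply equals demand.

import numpy as np
from scipy.optimize import linprog

def transportation(cost, supply, demand):
    """cost: n x k matrix; supply: length n; demand: length k."""
    n, k = len(supply), len(demand)
    c = np.asarray(cost, dtype=float).reshape(n * k)
    A_eq = np.zeros((n + k, n * k))
    for i in range(n):                     # each source ships exactly its supply
        A_eq[i, i * k:(i + 1) * k] = 1.0
    for j in range(k):                     # each sink receives exactly its demand
        A_eq[n + j, j::k] = 1.0
    b_eq = np.concatenate([supply, demand]).astype(float)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, k), res.fun

# plan, total_cost = transportation([[4, 6], [2, 5]], supply=[30, 20], demand=[25, 25])
```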
