Similar Documents
20 similar documents found.
1.
We consider the following problem: given a rectangle containing n points, find the largest-perimeter subrectangle whose sides are parallel to those of the original rectangle, whose aspect ratio is below a given bound, and which does not contain any of the given points. Chazelle, Drysdale, and Lee have studied a variant of this problem with area as the quantity to be maximized; they gave an O(n log^3 n) algorithm for that problem. We adopt a similar divide-and-conquer approach and are able to use the simpler properties of the perimeter measure to obtain an O(n log^2 n) algorithm for our problem. The work of the first author was supported by the Academy of Finland and that of the second by the Natural Sciences and Engineering Research Council of Canada, Grant No. A-5692.
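As a point of reference, a crude brute-force baseline (not the paper's divide-and-conquer algorithm) can enumerate candidate rectangles whose edges lie on point coordinates or the boundary and keep the best empty one. With an aspect-ratio bound this candidate set can miss optima that would have to be shrunk to satisfy the bound, so the sketch below, with names of our own choosing, is purely illustrative:

```python
from itertools import combinations

def largest_empty_perimeter(points, width, height, max_aspect):
    """Brute force: try every rectangle whose edges lie on point
    coordinates or the boundary; keep the largest perimeter among
    empty, aspect-feasible candidates. Far slower than the paper's
    O(n log^2 n) algorithm, and the grid candidates can miss optima
    that must shrink to meet the aspect bound."""
    xs = sorted({0.0, width} | {p[0] for p in points})
    ys = sorted({0.0, height} | {p[1] for p in points})
    best = 0.0
    for x1, x2 in combinations(xs, 2):
        for y1, y2 in combinations(ys, 2):
            w, h = x2 - x1, y2 - y1
            if max(w, h) > max_aspect * min(w, h):
                continue                      # aspect ratio too extreme
            if any(x1 < px < x2 and y1 < py < y2 for px, py in points):
                continue                      # a point lies strictly inside
            best = max(best, 2 * (w + h))
    return best

pts = [(2.0, 3.0), (5.0, 1.0), (7.0, 6.0)]
print(largest_empty_perimeter(pts, 10.0, 8.0, max_aspect=3.0))
```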

2.
A parallel algorithm to generate all D_n derangements of n distinct elements is presented in this paper. The algorithm requires O([D_n/P] n log n) time when P processors are available on a Single Instruction Multiple Data stream (SIMD) computer.
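The paper's algorithm is a parallel SIMD one; as a sequential illustration of what is being enumerated, here is a minimal recursive generator of derangements (all names here are ours):

```python
def derangements(n):
    """Yield every permutation p of 0..n-1 with p[i] != i for all i
    (a derangement), by straightforward recursive extension."""
    def extend(prefix, remaining):
        if not remaining:
            yield tuple(prefix)
            return
        i = len(prefix)                    # position being filled
        for v in remaining:
            if v != i:                     # forbid a fixed point here
                yield from extend(prefix + [v], remaining - {v})
    yield from extend([], frozenset(range(n)))

print(sum(1 for _ in derangements(4)))     # D_4 = 9
```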

3.
De Graaf and Kosters have studied the expected height of the kth element in a heap. They conjecture that, for large k, this is asymptotic to log2 k + 0.72... . We show that the height of the kth element is related to the depth of insertion in a digital search tree, and use this relation to prove their conjecture. This research was supported by grant FONDECYT (Chile) 91-1252.
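A quick Monte Carlo check of the conjectured log2 k + 0.72... asymptotics is easy to run. The sketch below assumes one common model of a "random heap", namely heapify of a random permutation, which need not match the paper's model exactly:

```python
import heapq, math, random

def avg_depth_of_kth(k, n=1 << 15, trials=100):
    """Monte Carlo estimate of the expected depth (root = 0) of the
    k-th smallest key in a heap built by heapify-ing a random
    permutation of n keys. The conjecture says this approaches
    log2 k + 0.72... for large k."""
    total = 0
    for _ in range(trials):
        a = list(range(n))
        random.shuffle(a)
        heapq.heapify(a)
        pos = a.index(k - 1)              # where the k-th smallest landed
        total += int(math.log2(pos + 1))  # depth in the implicit tree
    return total / trials

print(avg_depth_of_kth(1000))  # compare with log2(1000) + 0.72 ≈ 10.7
```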

4.
The problem of file organization which we consider involves altering the placement of records on the pages of a secondary storage device. In addition, we want this reorganization to be done in place, i.e., using the file's original storage space for the newly reorganized file. The motivation for such a physical change is to improve the database system's performance. For example, by placing frequently and jointly accessed records on the same page or pages, we can try to minimize the number of page accesses made in answering a set of queries. The optimal assignment (or reassignment) of records to clusters is exactly what record clustering algorithms attempt to do. However, record clustering algorithms usually do not solve the entire problem, i.e., they do not specify how to efficiently reorganize the file to reflect the clustering assignment which they determine. Our algorithm is a companion to general record clustering algorithms, since it actually transforms the file. The problem of optimal file reorganization is NP-hard; consequently, our reorganization algorithm is based on heuristics. The algorithm's time and space requirements are reasonable, and its solution is near optimal. In addition, the reorganization problem which we consider in this paper is similar to the problem of join processing when indexes are used. The research of this author was partially supported by the National Science Foundation under grant IST-8696157.

5.
In this paper, we develop a sequential algorithm for the maximum matching problem on cographs. The input is a parse tree of some cograph. The time complexity of our algorithm is linear. Supported by the National Science Council of Taiwan, R.O.C., under Grant NSC81-0415-E-005-03.

6.
We present a universally applicable algorithm for generating minimal perfect hashing functions. The method has (worst-case) polynomial time complexity in units of bit operations. An adjunct algorithm for reducing parameter magnitudes in the generated hash functions is given. This probabilistic method makes hash function parameter magnitudes independent of argument (input key) magnitudes.
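The abstract does not spell out the construction, but the defining property is easy to state in code: a hash function is perfect on a key set if it is collision-free, and minimal if its range size equals the number of keys. The toy exhaustive search below (our own, and far slower than the paper's polynomial-time method) merely illustrates that property:

```python
def _next_prime(m):
    def is_prime(x):
        return x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1))
    while not is_prime(m):
        m += 1
    return m

def minimal_perfect_hash(keys):
    """Exhaustively search for (a, b, p) such that
    h(k) = ((a*k + b) % p) % n maps `keys` one-to-one onto 0..n-1:
    perfect (no collisions) and minimal (range size == len(keys))."""
    n = len(keys)
    p = _next_prime(max(keys) + 1)
    for a in range(1, p):
        for b in range(p):
            if len({(a * k + b) % p % n for k in keys}) == n:
                return a, b, p
    raise ValueError("no parameters found")

print(minimal_perfect_hash([19, 42, 77, 101, 250]))
```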

7.
In this paper we show that to construct an implicit, double-ended priority queue organized as a min-max heap, 17/9 n = 1.88...n comparisons suffice in the worst case (neglecting lower-order terms). The algorithm improves the previously best known upper bound of 2.15...n comparisons. Some of the ideas given in this paper were presented in a preliminary form at the 1st Scandinavian Workshop on Algorithm Theory, Halmstad (July 1988) [7].
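For concreteness, the invariant such a construction must establish can be checked directly: in a min-max heap, nodes on min levels are no larger than all their descendants, and nodes on max levels no smaller. A sketch of a checker (ours, not from the paper); comparing each node with its parent and grandparent suffices because the orderings compose along ancestor chains:

```python
import math

def is_min_max_heap(a):
    """Verify the min-max heap invariant on an implicit array `a`
    (children of index i at 2i+1 and 2i+2; the root is on a min
    level). Each node is compared with its parent (opposite kind of
    level) and grandparent (same kind of level)."""
    n = len(a)
    for i in range(1, n):
        parent = (i - 1) // 2
        on_min_level = int(math.log2(i + 1)) % 2 == 0
        if on_min_level:
            if a[i] > a[parent]:
                return False
        elif a[i] < a[parent]:
            return False
        if parent >= 1:
            gp = (parent - 1) // 2        # same kind of level as i
            if on_min_level:
                if a[i] < a[gp]:
                    return False
            elif a[i] > a[gp]:
                return False
    return True

print(is_min_max_heap([1, 9, 8, 2, 3, 4, 5]))  # True
```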

8.
This paper derives an upper bound for the speedup obtainable by any parallel branch-and-bound algorithm using the best-bound search strategy. We confirm that parallel branch-and-bound can achieve nearly linear, or even super-linear, speedup under the appropriate conditions. This work was supported by U.S. Army Research Office grant DAAG29-82-K-0107.

9.
A parallel version of the double distributive partitioning sorting algorithm is described, and two versions of this algorithm are compared on a MIMD shared-memory multiprocessor. A reclaimer version, in which child processes completely build their page tables before starting useful work, exhibits O(n/p) expected-case time complexity for a wide class of distributions. Work performed under the auspices of Lockheed Independent Research Grant 86RDD502.

10.
Two methods are given for constructing total exchange algorithms for hypercubic processor networks. This is done by means of bit sequences with special properties. The algorithms are optimal with respect to a given time model, need no intermediate message buffering and are local in the sense that every processor executes basically the same program.
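As an illustration of total exchange (all-to-all personalized communication) on a hypercube, the simulation below routes messages dimension by dimension. This is the textbook store-and-forward scheme, with intermediate buffering, unlike the buffer-free bit-sequence constructions of the paper:

```python
def total_exchange(d):
    """Simulate all-to-all personalized exchange on a d-cube: in
    round i each node forwards, across dimension i, exactly those
    messages whose destination differs from it in bit i. Every
    message arrives after d rounds."""
    n = 1 << d
    # inbox[u] holds (src, dst) messages currently buffered at node u
    inbox = {u: [(u, v) for v in range(n)] for u in range(n)}
    for i in range(d):
        bit = 1 << i
        nxt = {u: [] for u in range(n)}
        for u in range(n):
            for s, t in inbox[u]:
                nxt[u ^ bit if (t ^ u) & bit else u].append((s, t))
        inbox = nxt
    assert all(t == u for u in inbox for _, t in inbox[u])
    return inbox

total_exchange(3)  # 8 nodes, 3 rounds, all 64 messages delivered
```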

11.
This paper presents fast parallel algorithms for the following graph-theoretic problems: breadth-depth search of directed acyclic graphs; minimum-depth search of graphs; finding the minimum-weighted paths between all node pairs of a weighted graph and the critical activities of an activity-on-edge network. The first algorithm has O(log d log n) time complexity with O(n^3) processors, and the remaining algorithms achieve an O(log d log log n) time bound with O(n^2 [n/log log n]) processors, where d is the diameter of the graph or of the directed acyclic graph (which also represents an activity-on-edge network) with n nodes. These algorithms work on an unbounded shared-memory model of the single instruction stream, multiple data stream computer that allows both read and write conflicts.

12.
An arborescence of a multihop radio network is a directed spanning tree (with root x) such that the edges are directed away from the root. Based upon an arborescence, x can broadcast a message to the other nodes along the directed edges of the spanning tree. The minimum transmission power arborescence problem is to find an arborescence such that the message can be broadcast to the other nodes using a minimal amount of transmission power. The minimum delay arborescence problem is to find an arborescence such that a message can be broadcast to the other nodes using a minimal number of broadcast transmissions. In this paper we show that both these problems are NP-complete; the reductions are from the maximum leaf spanning tree problem. A reverse arborescence is similar to an arborescence except that the edges are directed toward the root. Based upon a reverse arborescence, the root node can collect information from the other nodes. In this paper we also show that the reverse minimum transmission power arborescence problem can be solved with the same computational complexity as that of finding a minimum cost spanning tree, and the reverse minimum delay arborescence problem can be solved with the same computational complexity as that of finding a spanning tree.
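To illustrate the positive results, a reverse arborescence for data collection can be read off a minimum spanning tree by orienting every tree edge toward the root, assuming symmetric link costs. The sketch below uses Prim's algorithm on a weight matrix; it conveys the "same complexity as MST" claim rather than the paper's exact reduction:

```python
import heapq

def reverse_arborescence(weight, root=0):
    """Prim's algorithm on a symmetric weight matrix, with every
    tree edge oriented toward the root -- a reverse arborescence
    suitable for collecting data at the root."""
    n = len(weight)
    in_tree = [False] * n
    parent = [-1] * n
    pq = [(0, root, -1)]                  # (edge weight, node, its parent)
    while pq:
        _, u, p = heapq.heappop(pq)
        if in_tree[u]:
            continue
        in_tree[u], parent[u] = True, p
        for v in range(n):
            if not in_tree[v] and v != u:
                heapq.heappush(pq, (weight[u][v], v, u))
    return [(u, parent[u]) for u in range(n) if parent[u] != -1]

W = [[0, 2, 9],
     [2, 0, 4],
     [9, 4, 0]]
print(reverse_arborescence(W))  # [(1, 0), (2, 1)]: edges point toward node 0
```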

13.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time respectively, both with O(n) space, using p processors on an EREW PRAM. When p = O(n/log n), the cost is optimal, in terms of the product of time and number of processors. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms leads to a parallel quickhull algorithm on the EREW PRAM. The work of this author was partially supported by a fellowship from the College of Science, Old Dominion University, Norfolk, VA 23529, USA.
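The balanced-BST claim has a simple sequential analogue: the recursion tree of quicksort is itself a binary search tree, so a tree can be built by partitioning alone, with no sorting pass. The sketch below (ours, assuming distinct keys) gets expected logarithmic depth from random pivots; the paper's parallel version runs on an EREW PRAM:

```python
import random

def partition_bst(items):
    """Build a binary search tree quicksort-style: a random pivot
    becomes the root and the two partitions become the subtrees,
    so the recursion tree IS the BST. Expected depth O(log n);
    assumes distinct keys."""
    if not items:
        return None
    pivot = random.choice(items)
    left = [x for x in items if x < pivot]
    right = [x for x in items if x > pivot]
    return (partition_bst(left), pivot, partition_bst(right))

print(partition_bst([7, 3, 9, 1, 5, 8, 2]))
```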

14.
This paper is concerned with the allocation of multi-attribute records on several disks so as to achieve a high degree of concurrency of disk access when responding to partial match queries. An algorithm to distribute a set of multi-attribute records onto different disks is presented. Since our allocation method uses principal component analysis, this concept is introduced first. We then use it to generate a set of real numbers, the projections on the first principal component direction, which can be viewed as hashing addresses. We then propose an algorithm based upon these hashing addresses to allocate multi-attribute records onto different disks. Experimental results show that our method can indeed be used to solve the multi-disk data allocation problem for concurrent accessing.
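A plausible reading of the method, sketched below with names of our own choosing: project the records onto the first principal component to get one-dimensional "hashing addresses", then deal the records round-robin in sorted order so that similar records (likely to be co-retrieved by a partial match query) land on different disks:

```python
import numpy as np

def allocate_to_disks(records, m):
    """Project records onto the first principal component to obtain
    one-dimensional 'hashing addresses', then deal them, in sorted
    order of address, round-robin across m disks."""
    X = np.asarray(records, dtype=float)
    X -= X.mean(axis=0)                       # center before PCA
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    addresses = X @ vt[0]                     # first principal direction
    disks = [[] for _ in range(m)]
    for rank, idx in enumerate(np.argsort(addresses)):
        disks[rank % m].append(int(idx))      # neighbors go to different disks
    return disks

recs = [[1, 2], [1.1, 2.1], [5, 9], [5.2, 9.1], [3, 4], [3.1, 4.2]]
print(allocate_to_disks(recs, 2))  # near-duplicate records are separated
```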

15.
In this paper, we present a method, called the partition data allocation method, for allocating a set of data onto a multi-disk system. Using the partition data allocation method, we show that, in a two-disk system, performance is influenced by how we disperse similar data onto different disks.

16.
The maximum weight independent set problem for a general graph is NP-hard, but for some special classes of graphs polynomial-time algorithms do exist for solving it. Based on the divide-and-conquer strategy, Pawagi presented an O(|V| log |V|) time algorithm for solving this problem on a tree. In this paper, we propose an O(|V|) time algorithm that improves Pawagi's result. The proposed algorithm is based on the dynamic programming strategy and is time optimal to within a constant factor.
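The linear-time bound is the natural one here, since the standard dynamic program over the tree already runs in O(|V|) time: each vertex keeps the best weight with and without itself, combined bottom-up from its children. A sketch of that standard DP (the paper's algorithm is of this flavor, though its details may differ):

```python
def mwis_tree(adj, weight, root=0):
    """O(|V|) maximum weight independent set on a tree: take[v] is
    the best weight with v in the set, skip[v] without it, combining
    children before their parent."""
    n = len(adj)
    take, skip = list(weight), [0.0] * n  # take[v] starts at w(v)
    parent = [-1] * n
    order = [root]
    for v in order:                       # BFS gives a top-down order
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                order.append(u)
    for v in reversed(order):             # children before parents
        for u in adj[v]:
            if u != parent[v]:
                take[v] += skip[u]
                skip[v] += max(take[u], skip[u])
    return max(take[root], skip[root])

adj = [[1], [0, 2], [1]]                  # the path a - b - c
print(mwis_tree(adj, [5, 4, 5]))          # -> 10: take both endpoints
```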

17.
Computing a maximum independent set, weighted or unweighted, is NP-hard for general as well as planar graphs. However, polynomial time algorithms do exist for solving this problem on special classes of graphs. In this paper we present an efficient algorithm for computing a maximum weight independent set in trees. A divide-and-conquer approach based on centroid decomposition of trees is used to compute a maximum weight independent set within O(n log n) time, where n is the number of vertices in the tree. We introduce the notion of an alternating tree, which is crucial in obtaining a new independent set from the previous one.

18.
Approximating maximum independent sets by excluding subgraphs
An approximation algorithm for the maximum independent set problem is given, improving the best performance guarantee known to O(n/(log n)^2). We also obtain the same performance guarantee for graph coloring. The results can be combined into a surprisingly strong simultaneous performance guarantee for the clique and coloring problems. The framework of subgraph-excluding algorithms is presented. We survey the known approximation algorithms for the independent set (clique), coloring, and vertex cover problems and show how almost all fit into that framework. We show that among subgraph-excluding algorithms, the ones presented achieve the optimal asymptotic performance guarantees. A preliminary version of this paper appeared in [9]. Supported in part by National Science Foundation Grant CCR-8902522 and PYI Award CCR-9057488. Research done at Rutgers University. Supported in part by a Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) fellowship.

19.
We give a generalization of the hypergreedy algorithm for minimum weight perfect matching on a complete edge-weighted graph whose weights satisfy the triangle inequality. With a modified version of this algorithm we obtain a log n-approximate perfect matching heuristic for points in the Euclidean plane, in O(n log^2 n) time. This research was supported in part by the DIMACS Grant No. NSF-STC88-09648. This research was supported in part by the NSF under Grant No. CCR 88-07518.

20.
A general sorting algorithm, having the well-known O(n^2) algorithms Straight Insertion Sort and Selection Sort as special cases, is described. The algorithm is analyzed in the case that certain choices in it are made randomly, which yields an algorithm with an average complexity of O(n^1.5) and a worst-case complexity of O(n^2). However, making random choices (by some random number generator) is time-consuming, and a simple quasi-random scheme is therefore implemented. Test runs indicate that this version also has average complexity of O(n^1.5), and it even seems to perform better than the version using real random choices (even if we disregard the time used for the random choices). This version also needs very little administrative overhead, and it therefore compares favourably to many other sorting algorithms for small and medium-sized arrays.
