Similar Documents (20 results)
1.
A synchronized parallel algorithm for finding a maximum flow in a directed flow network is presented. Its depth is O(n^3 (log n)/p), where p (p ≤ n) is the number of processors used. This problem appears to be harder than most problems for which efficient parallel algorithms exist. The parallel algorithm also induces a new, rather simple sequential O(n^3) algorithm. The algorithm is strongly parallel in character: it is difficult to conceive and analyze from a purely sequential point of view.

2.
The phase-I maximum flow and most-positive-cut methods are used to solve the feasibility problem; each takes one maximum flow computation, so the feasibility problem can be solved using maximum flow algorithms. Let n and m be the number of nodes and arcs, respectively. In this paper, we present an algorithm to solve the feasibility problem with integer lower and upper bounds. The running time of our algorithm is O(mn log(nU)), where U is the value of the maximum upper bound. Our algorithm improves the O(m^2 log(nU))-time algorithm in [12] by a factor of n. Sleator and Goldberg's algorithm is one of the well-known maximum flow algorithms, running in O(mn log n) time, see [5]. Under the similarity assumption [11], our algorithm also runs in O(mn log n) time, matching that bound. The merit of our algorithm is that, when the given network is infeasible, it not only diagnoses infeasibility but also provides information useful to the modeler in estimating the maximum cost of adjusting the infeasible network.
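To make the reduction concrete, here is a minimal Python sketch of the classical transformation from feasibility with lower bounds to a single max-flow computation: subtract the lower bounds, route the forced flow through a super-source and super-sink, and test whether the super-source's arcs can be saturated. It uses a plain Edmonds-Karp max flow, so it runs in O(m^2 n) rather than the paper's O(mn log(nU)); all names are illustrative, and this is not the authors' code.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeated BFS augmenting paths on a capacity dict."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        b, v = float('inf'), t          # find bottleneck along the path
        while parent[v] is not None:
            u = parent[v]; b = min(b, cap[u][v]); v = u
        v = t                           # push flow, update residual arcs
        while parent[v] is not None:
            u = parent[v]; cap[u][v] -= b; cap[v][u] += b; v = u
        flow += b

def feasible_circulation(n, arcs):
    """arcs: list of (u, v, lower, upper). True iff a circulation meeting
    all bounds exists (classical reduction to one max-flow computation)."""
    cap = defaultdict(lambda: defaultdict(int))
    excess = [0] * n
    for u, v, lo, hi in arcs:
        cap[u][v] += hi - lo            # residual room above the lower bound
        excess[v] += lo                 # forced inflow
        excess[u] -= lo                 # forced outflow
    S, T = n, n + 1                     # super-source / super-sink
    need = 0
    for v in range(n):
        if excess[v] > 0:
            cap[S][v] += excess[v]; need += excess[v]
        elif excess[v] < 0:
            cap[v][T] += -excess[v]
    return max_flow(cap, S, T) == need

print(feasible_circulation(2, [(0, 1, 1, 2), (1, 0, 1, 2)]))  # True
print(feasible_circulation(2, [(0, 1, 2, 3), (1, 0, 0, 1)]))  # False
```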

3.
We present algorithms for maintaining data structures supporting fast (polylogarithmic) point-location and ray-shooting queries in arrangements of hyperplanes. The data structure allows deletion and insertion of hyperplanes. Our algorithms use random bits in the construction of the data structure but make no assumptions about the update sequence or the hyperplanes in the input. The query bound for our data structure is Õ(polylog(n)), where n is the number of hyperplanes at any given time, and the Õ notation indicates that the bound holds with high probability, where the probability is solely with respect to randomization in the data structure. By high probability we mean that the probability of error is inversely proportional to a large-degree polynomial in n. The space requirement is Õ(n^d). The cost of an update is Õ(n^{d-1} log n); the expected cost of an update is O(n^{d-1}), where the expectation is again solely with respect to randomization in the data structure. Our algorithm is extremely simple. We also give a related algorithm with optimal Õ(log n) query time, expected O(n^d) space requirement, and amortized expected update cost O(n^{d-1}). Moreover, our approach has a versatile quality which is likely to have further applications to other dynamic algorithms. For d = 2, 3 we also show how to obtain polylogarithmic update time in the CRCW PRAM model, so that the processor-time product matches (within a polylogarithmic factor) the sequential update time.

4.
We establish new lower bounds on the complexity of the following basic geometric problem, attributed to John Hopcroft: given a set of n points and m hyperplanes in R^d, is any point contained in any hyperplane? We define a general class of partitioning algorithms and show that in the worst case, for all m and n, any such algorithm requires time Ω(n log m + n^{2/3} m^{2/3} + m log n) in two dimensions, or Ω(n log m + n^{5/6} m^{1/2} + n^{1/2} m^{5/6} + m log n) in three or more dimensions. We obtain slightly higher bounds for the counting version of Hopcroft's problem in four or more dimensions. Our planar lower bound is within a factor of 2^{O(log*(n+m))} of the best known upper bound, due to Matoušek. Previously, the best known lower bound, in any dimension, was Ω(n log m + m log n). We develop our lower bounds in two stages. First we define a combinatorial representation of the relative order type of a set of points and hyperplanes, called a monochromatic cover, and derive lower bounds on its size in the worst case. We then show that the running time of any partitioning algorithm is bounded below by the size of some monochromatic cover. As a related result, using a straightforward adversary argument, we derive a quadratic lower bound on the complexity of Hopcroft's problem in a surprisingly powerful decision tree model of computation.

5.
In this paper, we consider the well-known resource-constrained project scheduling problem. We give arguments that even a special case of this problem with a single type of resource is not approximable in polynomial time to within a constant approximation ratio. We prove that there exist instances for which the optimal makespan values of the non-preemptive and the preemptive problems have a ratio of O(log n), where n is the number of jobs. This means that there exist instances for which the lower bound of Mingozzi et al. has a relative error as bad as O(log n), and the calculation of this bound is an NP-hard problem. In addition, we prove that there is a class of instances for which known polynomial-time approximation algorithms have an approximation ratio of at least order √n, and known lower bounds have a relative error of at least order log n. This class of instances corresponds to the single-machine parallel-batch scheduling problem 1|p-batch, b=∞|C_max.

6.
Journal of Complexity, 1996, 12(2): 81–115
Given a univariate polynomial f(z) of degree n with complex coefficients whose magnitudes are less than 2^m, the root problem is to find all the roots of f(z) up to a specified precision 2^{-μ}. Assuming the arithmetic model of computation, we provide an algorithm of complexity O(n log^5 n log B), where B = χ + μ and χ = max{n, m}. This improves on the previous best known algorithm for the problem, due to Pan, which has complexity O(n^2 log^2 n log(m + μ)). A remarkable property of our algorithm is that it does not require any assumptions about the root separation of f, which were either explicitly or implicitly required by previous algorithms. Moreover, it has a work-efficient parallel implementation. We also show that both the sequential and parallel implementations of the algorithm work without modification in the Boolean model of arithmetic. In this case, it follows from root perturbation estimates that we need only specify θ = ⌈n(B + log n + 3)⌉ bits of the binary representations of the real and imaginary parts of each of the coefficients of f. We also show that, by appropriate rounding of intermediate values, we can bound the number of bits required to represent all complex numbers occurring as intermediate quantities in the computation: the numbers used in every basic arithmetic operation can be restricted to those whose real and imaginary parts have at most φ bits, where [formula] and [formula]. Thus, in the Boolean model, the overall work complexity of the algorithm increases only by a multiplicative factor of M(φ), where M(ψ) = O(ψ (log ψ) log log ψ) is the bit complexity of multiplying integers of length ψ. The key result on which the algorithm is based is a new theorem of Coppersmith and Neff relating the geometric distribution of the zeros of a polynomial to the distribution of the zeros of its high-order derivatives. We also introduce several new techniques (splitting sets and "centered" points) which hinge on it. Finally, we observe that our root-finding algorithm can be efficiently parallelized to run in parallel time O(log^6 n log B) using n processors.
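The paper's splitting-set algorithm is intricate; as a concrete baseline for the problem statement only, here is a sketch of the classical Durand-Kerner (Weierstrass) simultaneous root iteration. This is not the paper's method and carries no comparable complexity guarantee.

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Simultaneously approximate all roots of a polynomial.
    coeffs: complex coefficients [a_0, a_1, ..., a_n], lowest degree first,
    with a_n != 0. Classical Durand-Kerner iteration, shown as a baseline.
    """
    n = len(coeffs) - 1
    an = coeffs[-1]
    p = lambda z: sum(c * z**k for k, c in enumerate(coeffs))
    # standard starting guesses: powers of a point off the real axis
    roots = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]
    for _ in range(max_iter):
        new = []
        for i, z in enumerate(roots):
            denom = an
            for j, w in enumerate(roots):
                if j != i:
                    denom *= (z - w)
            new.append(z - p(z) / denom)   # Weierstrass correction step
        if max(abs(a - b) for a, b in zip(new, roots)) < tol:
            return new
        roots = new
    return roots

# roots of z^3 - 1: the three cube roots of unity, all of modulus 1
print(sorted(round(abs(r), 6) for r in durand_kerner([-1, 0, 0, 1])))
```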

7.
Recently an O(n^4) volume algorithm was presented for convex bodies by Lovász and Vempala, where n is the number of dimensions of the convex body. Essentially, the algorithm is a series of Monte Carlo integrations. In this paper we describe a computer implementation of the volume algorithm, in which we improve the computational aspects of the original algorithm by adding variance-decreasing modifications: a stratified sampling strategy, double point integration, and orthonormalised estimators. Formulas and methodology were developed so that the errors in each phase of the algorithm can be controlled. Computational results for convex bodies in dimensions ranging from 2 to 10 are presented as well.
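As a toy illustration of "volume by Monte Carlo integration", the following sketch estimates volume by naive hit-or-miss sampling in a bounding box. This is emphatically not the Lovász-Vempala algorithm: hit-or-miss accuracy degrades exponentially with dimension, which is precisely what the multi-phase O(n^4) approach is designed to avoid.

```python
import random

def hit_or_miss_volume(contains, box, samples=100_000, seed=0):
    """Naive Monte Carlo volume: fraction of uniform bounding-box samples
    landing inside the body, times the box volume. Illustrative only.

    contains: membership oracle, point (tuple) -> bool
    box: list of (lo, hi) bounds per coordinate
    """
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= (hi - lo)
    hits = sum(
        contains(tuple(rng.uniform(lo, hi) for lo, hi in box))
        for _ in range(samples)
    )
    return box_vol * hits / samples

# unit ball in R^3: true volume 4*pi/3 ~ 4.18879
ball = lambda p: sum(x * x for x in p) <= 1.0
print(hit_or_miss_volume(ball, [(-1.0, 1.0)] * 3))
```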

8.
In this paper, we present approximation algorithms for the minimum vertex and edge guard problems for polygons, with or without holes, with a total of n vertices. For simple polygons, the approximation algorithms for both problems run in O(n^4) time and yield solutions at most O(log n) times the optimal solution. For polygons with holes, the approximation algorithms for both problems give the same O(log n) approximation ratio, but their running time increases by a factor of n, to O(n^5).
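The O(log n) ratio in results of this kind typically comes from greedy set cover applied to visibility regions. Below is a generic sketch of the greedy step only; the geometric part, computing which polygon pieces each candidate guard sees, is assumed and not shown.

```python
def greedy_set_cover(universe, sets):
    """Generic greedy set cover: repeatedly take the set covering the most
    still-uncovered elements. Uses at most H_n ~ ln n times the optimal
    number of sets -- the usual source of O(log n) guard guarantees once
    'sets' are visibility regions of candidate guards (not computed here).
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(6)
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 4}]
print(greedy_set_cover(universe, sets))  # [0, 2] covers everything
```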

9.
An algorithm is presented which finds a maximum stable set of a family of n arcs on a circle in O(n log n) time, given the arcs as an unordered list of their endpoints, or in O(n) time if the endpoints are already sorted. If we are given only the circular-arc graph, without a circular-arc representation of it, then a maximum stable set can be found in O(n + δe) time, where n, e, and δ are the number of vertices, the number of edges, and the minimum vertex degree, respectively. Our algorithms are based on a simple neighborhood reduction theorem which allows one to reduce any circular-arc graph to a special canonical form.
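For intuition, here is the straight-line special case: when no arc wraps around, the problem is classical interval scheduling, solved greedily by earliest right endpoint. The paper's contribution, reducing the genuinely circular case via the neighborhood reduction theorem, is not reproduced here.

```python
def max_stable_intervals(intervals):
    """Maximum set of pairwise-disjoint closed intervals on a line, greedy
    by right endpoint -- the non-circular special case of the arc problem.
    O(n log n) from the sort, O(n) if endpoints arrive sorted.
    """
    chosen, last_end = [], float('-inf')
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last_end:          # disjoint from everything chosen so far
            chosen.append((lo, hi))
            last_end = hi
    return chosen

print(len(max_stable_intervals([(0, 3), (2, 5), (4, 7), (6, 9)])))  # 2
```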

10.
In this paper, we present parallel quicksort algorithms running in O((n/p + log p) log n) expected time and O((n/p + log p + log log n) log n) deterministic time, respectively, both with O(n) space, using p processors on an EREW PRAM. When p = O(n/log n), the cost is optimal in terms of the product of time and number of processors. These algorithms can be used to obtain parallel algorithms for constructing balanced binary search trees without using sorting algorithms. One of our quicksort algorithms also leads to a parallel quickhull algorithm on the EREW PRAM.
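A sequential sketch of the connection to balanced search trees: the recursion tree of randomized quicksort is itself a binary search tree with expected depth O(log n), so a BST can be built without sorting first. The EREW PRAM parallelization is the paper's contribution and is not shown; keys are assumed distinct.

```python
import random

def quicksort_bst(items, rng=random.Random(0)):
    """Build a binary search tree from the recursion tree of randomized
    quicksort: the pivot becomes the root and the two partitions become
    the subtrees. Expected depth O(log n). Assumes distinct keys
    (extra copies of the pivot are dropped). Node = (key, left, right).
    """
    if not items:
        return None
    pivot = rng.choice(items)
    left = [x for x in items if x < pivot]
    right = [x for x in items if x > pivot]
    return (pivot, quicksort_bst(left, rng), quicksort_bst(right, rng))

def depth(node):
    return 0 if node is None else 1 + max(depth(node[1]), depth(node[2]))

print(depth(quicksort_bst(list(range(1000)))))  # a small multiple of log2(1000)
```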

11.
We present an algorithm to compute, in O(m + n log n) time, a maximum clique in a circular-arc graph (with n vertices and m edges), provided a circular-arc model of the graph is given. If the circular-arc endpoints are given in sorted order, the time complexity is O(m). The algorithm operates on the geometric structure of the circular arcs, radially sweeping their endpoints; it uses a very simple data structure consisting of doubly linked lists. Previously, the best time bound for this problem was O(m log log n + n log n), using an algorithm that solved an independent subproblem for each of the n circular arcs. By using the radial-sweep technique, we need not solve each of these subproblems independently; thus we eliminate the log log n factor from the running time of earlier algorithms. For vertex-weighted circular-arc graphs, it is possible to use our approach to obtain an O(m log log n + n log n) algorithm for finding a maximum-weight clique, which matches the best known algorithm.
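The sweep idea has a simple straight-line analogue: in an interval graph, a maximum clique is just a point covered by the most intervals, found by sweeping the sorted endpoints. The sketch below handles only intervals on a line; arcs that wrap around the circle are exactly the complication the paper's radial sweep addresses.

```python
def max_clique_size_intervals(intervals):
    """Maximum clique in an interval graph = maximum number of intervals
    over a common point. Sweep the 2n sorted endpoints, counting opens
    and closes: O(n log n), or O(n) on pre-sorted endpoints.
    """
    events = []
    for lo, hi in intervals:
        events.append((lo, 0))   # 0 = open; sorts before a close at the
        events.append((hi, 1))   # 1 = close; same coordinate, so touching
    events.sort()                # closed intervals count as overlapping
    best = cur = 0
    for _, kind in events:
        cur += 1 if kind == 0 else -1
        best = max(best, cur)
    return best

print(max_clique_size_intervals([(0, 4), (1, 3), (2, 6), (5, 7)]))  # 3
```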

12.
A new algorithm for rearranging a heap is presented and analysed in the average case. The average-case upper bound for deleting the maximum element of a random heap is improved and shown to be less than ⌊log n⌋ + 0.299 + M(n) comparisons, where M(n) is between 0 and 1. It is also shown that a heap can be constructed using 1.650n + O(log n) comparisons with this algorithm, the best result for any algorithm that does not use extra space. The expected time to sort n elements is argued to be less than n log n + 0.670n + O(log n), while simulation results point to an average case of n log n + 0.4n, which would make it the fastest in-place sorting algorithm. The same technique is used to show that the average number of comparisons when deleting the maximum element of a heap using Williams' rearrangement algorithm is 2(⌊log n⌋ − 1.299 + L(n)), where L(n) is also between 0 and 1, and that the average cost of Floyd-Williams heapsort is at least 2n log n − 3.27n, counting only comparisons. An analysis of the number of interchanges when deleting the maximum element of a random heap, which is the same for both algorithms, is also presented.
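For reference, here is a sketch of the classical Williams-style delete-max that the 2(⌊log n⌋ − 1.299 + L(n)) figure refers to: the last element replaces the root and is sifted down with two comparisons per level. The paper's improved rearrangement strategy is not reproduced here.

```python
def delete_max(heap):
    """Delete the maximum of a binary max-heap stored in a list
    (children of i at 2i+1 and 2i+2), using the classical Williams-style
    sift-down: two comparisons per level, roughly 2 log2 n in total --
    the variant whose average case the paper pins down.
    """
    top = heap[0]
    last = heap.pop()
    if heap:
        i, n = 0, len(heap)
        while 2 * i + 1 < n:
            child = 2 * i + 1
            if child + 1 < n and heap[child + 1] > heap[child]:
                child += 1                  # comparison 1: pick larger child
            if last >= heap[child]:         # comparison 2: does it fit here?
                break
            heap[i] = heap[child]
            i = child
        heap[i] = last
    return top

h = [9, 7, 8, 3, 6, 4, 5, 1]
print(delete_max(h), h)  # 9 [8, 7, 5, 3, 6, 4, 1]
```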

13.
We present two new algorithms, ADT and MDT, for solving order-n Toeplitz systems of linear equations Tz = b in time O(n log^2 n) and space O(n). The fastest algorithms previously known, such as Trench's algorithm, require time Ω(n^2) and require that all principal submatrices of T be nonsingular; our algorithm ADT requires only that T be nonsingular. Both our Toeplitz algorithms are derived from algorithms for computing entries in the Padé table of a given power series. We prove that entries in the Padé table can be computed by the extended Euclidean algorithm. We describe an algorithm EMGCD (Extended Middle Greatest Common Divisor) which is faster than the algorithm HGCD of Aho, Hopcroft, and Ullman, although both require time O(n log^2 n), and we generalize EMGCD to PRSDC (Polynomial Remainder Sequence Divide and Conquer), which produces any iterate in the PRS, not just the middle term, in time O(n log^2 n). Applying PRSDC to the polynomials U_0(x) = x^{2n+1} and U_1(x) = a_0 + a_1 x + … + a_{2n} x^{2n} gives algorithm AD (Anti-Diagonal), which computes any (m, p) entry along the antidiagonal m + p = 2n of the Padé table for U_1 in time O(n log^2 n). Our other algorithm, MD (Main-Diagonal), computes any diagonal entry (n, n) in the Padé table of a normal power series, also in time O(n log^2 n); MD is related to Schönhage's fast continued fraction algorithm. A Toeplitz matrix T is naturally associated with U_1, and the (n, n) Padé approximation to U_1 gives the first column of T^{-1}. We show how a formula due to Trench can be used to compute the solution z of Tz = b in time O(n log n) from the first row and column of T^{-1}. Thus the Padé table algorithms AD and MD give O(n log^2 n) Toeplitz algorithms ADT and MDT. Trench's formula breaks down in certain degenerate cases, but in such cases a companion formula, the discrete analog of the Christoffel-Darboux formula, is valid and may be used to compute z in time O(n log^2 n) via the fast computation (by algorithm AD) of at most four Padé approximants. We also apply our results to obtain new complexity bounds for the solution of banded Toeplitz systems and for BCH decoding via Berlekamp's algorithm.
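To make the Padé/Euclid connection tangible, here is a plain quadratic-time polynomial remainder sequence in Python, with coefficient lists stored lowest degree first. The paper's point is that EMGCD/PRSDC produce any iterate of such a sequence in O(n log^2 n) instead; the example inputs below are illustrative, not drawn from the paper.

```python
def poly_divmod(a, b, eps=1e-12):
    """Quotient and remainder of polynomials given as coefficient lists,
    lowest degree first, over floats; assumes b's leading term is nonzero."""
    a, q = a[:], [0.0] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(abs(c) > eps for c in a):
        shift = len(a) - len(b)
        f = a[-1] / b[-1]
        q[shift] = f
        for i, c in enumerate(b):
            a[i + shift] -= f * c
        while a and abs(a[-1]) <= eps:   # strip (numerically) zero lead term
            a.pop()
    return q, (a or [0.0])

def remainder_sequence(u0, u1):
    """Euclidean polynomial remainder sequence U0, U1, U2, ... The paper
    shows its iterates encode Pade-table entries; this plain version costs
    O(n^2), versus O(n log^2 n) for the EMGCD/PRSDC divide and conquer."""
    seq = [u0, u1]
    while len(seq[-1]) > 1 or abs(seq[-1][0]) > 1e-12:
        _, r = poly_divmod(seq[-2], seq[-1])
        seq.append(r)
    return seq

# U0 = x^5, U1 = 1 + x + x^2 (low-order coefficients first)
for r in remainder_sequence([0, 0, 0, 0, 0, 1], [1, 1, 1]):
    print(r)
```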

14.
The convex hull (CH) is widely used in computer graphics, image processing, CAD/CAM, and pattern recognition. We investigate CH properties and derive two new ones: (1) the coordinates of CH vertices increase or decrease monotonically; (2) the edge slopes decrease monotonically. Using these properties, we propose two algorithms: a CH algorithm for a planar point set, and a CH algorithm for merging two given CHs. The main idea of the proposed algorithms is as follows: a planar point set is divided into several subsets by the extreme points, and the vertices in these subsets are then calculated separately. During the computation, the CH properties are used to eliminate concave points, which reduces the computational cost and improves speed. Our first algorithm extracts the CH in O(n log n) time, which is the lower bound for planar CH extraction, and the second algorithm obtains the CH in O(m+n) time in the worst case.
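The two monotonicity properties are exactly what the classical monotone chain (Andrew's) algorithm exploits, so a sketch of that O(n log n) method (not the authors' code) shows how concave points get eliminated:

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain, O(n log n). Builds lower and upper chains
    whose x-coordinates increase and whose edge slopes change
    monotonically -- the very properties used to discard concave points.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()              # out[-1] is a concave point: drop it
            out.append(p)
        return out
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]     # chain endpoints are shared

print(convex_hull([(0, 0), (2, 1), (4, 0), (2, 3), (1, 1), (3, 2)]))
# [(0, 0), (4, 0), (3, 2), (2, 3)]
```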

15.
We analyze a randomized pivoting process involving one line and n points in the plane. The process models the behavior of the RANDOM-EDGE simplex algorithm on simple polytopes with n facets in dimension n − 2. We obtain a tight O(log^2 n) bound for the expected number of pivot steps. This is the first nontrivial bound for RANDOM-EDGE that goes beyond bounds for specific polytopes. The process itself can be interpreted as a simple algorithm for certain two-variable linear programming problems, and we prove a tight Θ(n) bound for its expected runtime.

16.
Three varieties of the closure of a set of iso-oriented (i.e., rectilinearly oriented) rectangles are introduced: uni-directional, diagonal, and rectangular closure. First, a strong decomposition theorem for diagonal closure in terms of uni-directional closure is proved. Then time- and space-optimal algorithms to compute uni-directional and diagonal closure, each running in O(n log n) time and O(n) space, are described, along with an O(n log n) time and space algorithm for rectangular closure. The algorithm for diagonal closure has applications in database concurrency control: it yields an O(n log n) time and O(n) space algorithm for testing safety and detecting deadlocks in locked transaction systems.

17.
We study exact algorithms for the MAX-CUT problem. Introducing a new technique, we present an algorithmic scheme that computes a maximum cut in graphs with bounded maximum degree; our algorithm runs in time O*(2^{(1−2/Δ)n}). We also describe a MAX-CUT algorithm for general graphs with time complexity O*(2^{mn/(m+n)}). Both algorithms use polynomial space.
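As a baseline for scale, the trivial exact algorithm enumerates all bipartitions in O*(2^n); the bounds O*(2^{(1−2/Δ)n}) and O*(2^{mn/(m+n)}) improve on this. A minimal sketch for small graphs:

```python
from itertools import combinations

def max_cut_bruteforce(n, edges):
    """Exact MAX-CUT by enumerating all 2^(n-1) bipartitions (vertex 0
    fixed on one side). The trivial O*(2^n) baseline that the paper's
    exponential-time algorithms improve on.
    """
    best, best_side = -1, None
    verts = range(1, n)
    for r in range(len(verts) + 1):
        for subset in combinations(verts, r):
            side = {0} | set(subset)
            cut = sum((u in side) != (v in side) for u, v in edges)
            if cut > best:
                best, best_side = cut, side
    return best, best_side

# 5-cycle: the optimum cut has 4 edges (odd cycles are not bipartite)
print(max_cut_bruteforce(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```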

18.
Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue). Their data structure, the Fibonacci heap (or F-heap), supports arbitrary deletion in O(log n) amortized time and the other heap operations in O(1) amortized time. In this paper we use F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs. For an undirected graph containing n vertices and m edges, our minimum spanning tree algorithm runs in O(m log β(m, n)) time, improved from O(m β(m, n)) time, where β(m, n) = min{i | log^(i) n ≤ m/n}. Our minimum spanning tree algorithm for directed graphs runs in O(n log n + m) time, improved from O(n log n + m log log log_{(m/n+2)} n). Both algorithms can be extended to allow a degree constraint at one vertex.
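For contrast with the F-heap bounds, here is Prim's algorithm with an ordinary binary heap (Python's heapq), which runs in O(m log n); a Fibonacci heap makes decrease-key O(1) amortized, which is the source of the improvements above. The graph encoding is illustrative.

```python
import heapq

def prim_mst(n, adj):
    """Minimum spanning tree weight by Prim's algorithm with a binary
    heap: O(m log n). adj[u] is a list of (weight, v) pairs for an
    undirected, connected graph on vertices 0..n-1.
    """
    seen = [False] * n
    heap = [(0, 0)]                 # (edge weight into the tree, vertex)
    total, picked = 0, 0
    while heap and picked < n:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue                # stale entry: a lazy decrease-key
        seen[u] = True
        total += w
        picked += 1
        for wt, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wt, v))
    return total

adj = [[(1, 1), (4, 2)], [(1, 0), (2, 2)], [(4, 0), (2, 1), (3, 3)], [(3, 2)]]
print(prim_mst(4, adj))  # 1 + 2 + 3 = 6
```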

19.
We study the problem of coloring graphs in an online manner. The only known deterministic online graph coloring algorithm with a sublinear performance function is due to Lovász, Saks, and Trotter [9]. Their algorithm colors graphs of chromatic number χ with no more than (2χn)/log* n colors, where n is the number of vertices. They point out that the performance can be improved slightly for graphs with bounded chromatic number; for three-chromatic graphs, for example, the number of colors used is O(n log log log n / log log n). We show that randomization helps in coloring graphs online. We present a simple randomized online algorithm that colors graphs with an expected number of colors O(2^χ χ^2 n^{(χ−2)/(χ−1)} (log n)^{1/(χ−1)}); for three-colorable graphs, the expected number of colors our algorithm uses is O(√(n log n)). All our algorithms run in polynomial time. It is interesting to note that our algorithm compares well with the best known polynomial-time offline algorithms; for instance, the best polynomial-time algorithm known for three-colorable graphs is due to [4]. We also prove a lower bound of Ω((1/(χ−1)) (log n / (12(χ+1)) − 1)^{χ−1}) for the randomized model; no lower bound for the randomized model was previously known. For bounded χ, our result improves even the best known lower bound for the deterministic case, Ω((log n / log log n)^{χ−1}), due to Noga Alon (personal communication, September 1989).
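The simplest deterministic point of comparison is First-Fit, which colors each arriving vertex with the smallest color not used by its already-revealed neighbors. A minimal sketch (not the randomized algorithm of this entry):

```python
def first_fit_coloring(vertices, edges):
    """First-Fit online coloring: vertices arrive one at a time (edges
    only to earlier vertices are visible) and receive the smallest color
    unused by already-colored neighbors -- the simplest deterministic
    online strategy, a baseline the algorithms above improve on.
    """
    nbrs = {v: set() for v in vertices}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    color = {}
    for v in vertices:              # arrival order is the input order
        used = {color[u] for u in nbrs[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# a path (0-2-3-1) is 2-colorable, but this arrival order forces 3 colors
print(first_fit_coloring([0, 1, 2, 3], [(0, 2), (1, 3), (2, 3)]))
```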

20.
We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (J. Comput. System Sci. 51(3), 390–399, 1995) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration to determine the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ from theirs in essential ways: rather than resorting to convex programming to compute the entropy, we approximate the entropy, or make sure it is computed only once, in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present:
  1. an O(n^2) algorithm performing O(log n · log e(P)) comparisons;
  2. an O(n^{2.5}) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons;
  3. an O(n^{2.5}) algorithm performing O(log e(P)) comparisons.
All our algorithms can be implemented in such a way that their computational bottleneck is confined to a preprocessing phase, while the sorting phase is completed in O(q) + O(n) time, where q denotes the number of comparisons performed.
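A tiny brute-force illustration of the quantities involved: counting the linear extensions e(P) of a small poset and evaluating the information-theoretic bound log e(P). This enumerates all permutations, so it is for intuition only, not one of the paper's algorithms.

```python
from itertools import permutations
from math import log2

def linear_extensions(elements, less_than):
    """Count linear extensions e(P) of a poset by brute force (fine for
    tiny posets only). less_than is a set of known pairs (a, b), a < b.
    log2(e(P)) is the information-theoretic lower bound on comparisons
    that the algorithms above match up to constant factors.
    """
    count = 0
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in less_than):
            count += 1
    return count

# poset on {a, b, c, d} with a < b and c < d: e(P) = 6, bound ~ 2.58 bits
e = linear_extensions("abcd", {("a", "b"), ("c", "d")})
print(e, log2(e))
```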
