Similar Documents
20 similar documents found.
1.
Ray Shooting Amidst Convex Polygons in 2D
We consider the problem of ray shooting in a two-dimensional scene consisting of m convex polygons with a total of n edges. We present a data structure that requires O(mn log m) space and preprocessing time and that answers a ray shooting query in O(log² m log² n) time. If the polygons are pairwise disjoint, the space and preprocessing time can be improved to O((m² + n) log m) and O((m² + n log n) log m), respectively. Our algorithm also works for a collection of disjoint simple polygons. We also show that if we allow only O(n) space, a ray shooting query among a collection of disjoint simple polygons can be answered in time O(m/[formula]1+ε log² n), for any ε > 0.
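The query answered by this data structure is easy to state as code. Below is a brute-force baseline sketch in Python (not from the paper): it intersects the ray with every polygon edge and keeps the nearest hit, so each query costs O(n) instead of the polylogarithmic bounds above; the function and parameter names are illustrative, and polygons are assumed to be lists of (x, y) vertex pairs.

    def ray_segment_hit(o, d, a, b, eps=1e-12):
        # Ray o + t*d (t >= 0) against segment a->b; return the hit parameter t or None.
        ex, ey = b[0] - a[0], b[1] - a[1]
        rx, ry = a[0] - o[0], a[1] - o[1]
        det = ex * d[1] - ey * d[0]          # zero when the ray and the segment are parallel
        if abs(det) < eps:
            return None
        t = (ex * ry - ey * rx) / det        # position along the ray
        u = (d[0] * ry - d[1] * rx) / det    # position along the segment
        if t >= 0 and 0 <= u <= 1:
            return t
        return None

    def shoot_ray(origin, direction, polygons):
        # polygons: list of vertex lists; returns the smallest hit parameter t, or None.
        best = None
        for poly in polygons:
            for i in range(len(poly)):
                a, b = poly[i], poly[(i + 1) % len(poly)]
                t = ray_segment_hit(origin, direction, a, b)
                if t is not None and (best is None or t < best):
                    best = t
        return best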

2.
We propose a parallel algorithm which reduces the problem of computing Hamiltonian cycles in tournaments to the problem of computing Hamiltonian paths. The running time of our algorithm is O(log n) using O(n²/log n) processors on a CRCW PRAM, and O(log n log log n) on an EREW PRAM using O(n²/(log n log log n)) processors. As a corollary, we obtain a new parallel algorithm for computing Hamiltonian cycles in tournaments. This algorithm can be implemented in time O(log n) using O(n²/log n) processors in the CRCW model and in time O(log² n) with O(n²/(log n log log n)) processors in the EREW model.
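The parallel machinery is not easily condensed, but the sequential fact underlying the reduction — every tournament has a Hamiltonian path, which can be built by insertion — is short to illustrate. A minimal Python sketch (not the paper's algorithm); beats(u, v) is an assumed predicate returning True iff the arc is directed from u to v:

    def hamiltonian_path(n, beats):
        # Insert vertices one at a time into a growing directed path; O(n^2) comparisons.
        path = [0]
        for v in range(1, n):
            for i, u in enumerate(path):
                if beats(v, u):          # v beats the first such vertex: insert just before it
                    path.insert(i, v)
                    break
            else:
                path.append(v)           # v loses to every vertex on the path: append at the end
        return path

By Camion's theorem a tournament has a Hamiltonian cycle iff it is strongly connected, which is why reducing the cycle problem to the path problem is the natural route.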

3.
We consider the problem of computing the minimum of n values, and several well-known generalizations [prefix minima, range minima, and all nearest smaller values (ANSV)] for input elements drawn from the integer domain [1..s], where s ≥ n. In this article we give simple and efficient algorithms for all of the preceding problems. These algorithms all take O(log log log s) time using an optimal number of processors and O(n·s^ε) space (for constant ε < 1) on the COMMON CRCW PRAM. The best known upper bounds for the range minima and ANSV problems were previously O(log log n) (using algorithms for unbounded domains). For the prefix minima and for the minimum problems, the improvement is with regard to the model of computation. We also prove a lower bound of Ω(log log n) for domain size s = 2^{Ω(log n log log n)}. Since, for s at the lower end of this range, log log n = Ω(log log log s), this demonstrates that any algorithm running in o(log log log s) time must restrict the range of s on which it works.
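The doubly-logarithmic parallel algorithms are beyond a short sketch, but the ANSV problem itself has a classic linear-time sequential solution with a stack, which clarifies what is being computed (a minimal Python sketch, unrelated to the parallel bounds above):

    def all_nearest_smaller_values(a):
        # result[i] = index of the nearest element to the left of i that is
        # strictly smaller than a[i], or -1 if there is none; O(n) total.
        result, stack = [], []
        for i, x in enumerate(a):
            while stack and a[stack[-1]] >= x:
                stack.pop()
            result.append(stack[-1] if stack else -1)
            stack.append(i)
        return result

    # all_nearest_smaller_values([3, 1, 4, 1, 5]) == [-1, -1, 1, -1, 3]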

4.
We consider random walks on several classes of graphs and explore the likely structure of the vacant set, i.e. the set of unvisited vertices. Let Γ(t) be the subgraph induced by the vacant set of the walk at step t. We show that for random graphs G_{n,p} (above the connectivity threshold) and for random regular graphs G_r, r ≥ 3, the graph Γ(t) undergoes a phase transition in the sense of the well-known Erdős–Rényi phase transition. Thus for t ≤ (1 − ε)t*, there is a unique giant component, plus components of size O(log n), and for t ≥ (1 + ε)t* all components are of size O(log n). For G_{n,p} and G_r we give the value of t* and the size of Γ(t). For G_r, we also give the degree sequence of Γ(t), the size of the giant component (if any) of Γ(t), and the number of tree components of Γ(t) of a given size k = O(log n). We also show that for random digraphs D_{n,p} above the strong connectivity threshold, there is a similar directed phase transition. Thus for t ≤ (1 − ε)t*, there is a unique strongly connected giant component, plus strongly connected components of size O(log n), and for t ≥ (1 + ε)t* all strongly connected components are of size O(log n). © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2012
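A small simulation makes the object of study concrete: sample G(n, p), run a simple random walk for t steps, delete the visited vertices, and look at the component sizes of the induced subgraph Γ(t). The Python sketch below only illustrates the definitions (the names and the fixed start vertex are arbitrary choices), not the phase-transition proof.

    import random
    from collections import deque

    def vacant_set_component_sizes(n, p, t, seed=0):
        rng = random.Random(seed)
        # sample an instance of G(n, p)
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        # simple random walk of t steps starting at vertex 0
        visited, cur = {0}, 0
        for _ in range(t):
            if not adj[cur]:
                break                          # the walk is stuck on an isolated vertex
            cur = rng.choice(adj[cur])
            visited.add(cur)
        # component sizes of the subgraph induced by the vacant (unvisited) set
        vacant = set(range(n)) - visited
        sizes, seen = [], set()
        for s in vacant:
            if s in seen:
                continue
            queue, size = deque([s]), 0
            seen.add(s)
            while queue:
                u = queue.popleft()
                size += 1
                for w in adj[u]:
                    if w in vacant and w not in seen:
                        seen.add(w)
                        queue.append(w)
            sizes.append(size)
        return sorted(sizes, reverse=True)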

5.
We present an algorithm to compute, in O(m + n log n) time, a maximum clique in circular-arc graphs (with n vertices and m edges), provided a circular-arc model of the graph is given. If the circular-arc endpoints are given in sorted order, the time complexity is O(m). The algorithm operates on the geometric structure of the circular arcs, radially sweeping their endpoints; it uses a very simple data structure consisting of doubly linked lists. Previously, the best time bound for this problem was O(m log log n + n log n), using an algorithm that solved an independent subproblem for each of the n circular arcs. By using the radial-sweep technique, we need not solve each of these subproblems independently; thus we eliminate the log log n factor from the running time of earlier algorithms. For vertex-weighted circular-arc graphs, it is possible to use our approach to obtain an O(m log log n + n log n) algorithm for finding a maximum-weight clique, which matches the best known algorithm.

6.
Gibbs sampling, also known as Glauber dynamics, is a popular technique for sampling high-dimensional distributions defined on graphs. Of special interest is the behavior of Gibbs sampling on the Erdős–Rényi random graph G(n, d/n), where each edge is chosen independently with probability d/n and d is fixed. While the average degree in G(n, d/n) is d(1 − o(1)), it contains many nodes of degree of order log n/log log n. The existence of nodes of almost logarithmic degrees implies that for many natural distributions defined on G(n, p), such as uniform coloring (with a constant number of colors) or the Ising model at any fixed inverse temperature β, the mixing time of Gibbs sampling is at least n^{1+Ω(1/log log n)}. Recall that the Ising model with inverse temperature β defined on a graph G = (V, E) is the distribution over {±1}^V given by P(σ) ∝ exp(β Σ_{(u,v)∈E} σ_u σ_v). High-degree nodes pose a technical challenge in proving polynomial-time mixing of the dynamics for many models, including the Ising model and coloring. Almost all known sufficient conditions, in terms of β or the number of colors needed for rapid mixing of Gibbs samplers, are stated in terms of the maximum degree of the underlying graph. In this work, we show that for every d < ∞ and the Ising model defined on G(n, d/n), there exists a β_d > 0 such that for all β < β_d, with probability going to 1 as n → ∞, the mixing time of the dynamics on G(n, d/n) is polynomial in n. Our results are the first polynomial-time mixing results proven for a natural model on G(n, d/n) for d > 1 where the parameters of the model do not depend on n. They also provide a rare example where one can prove polynomial-time mixing of a Gibbs sampler in a situation where the actual mixing time is slower than n·polylog(n). Our proof exploits in novel ways the local tree-like structure of Erdős–Rényi random graphs, comparison and block-dynamics arguments, and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense, and to much more general interactions. In particular, they apply to any graph for which every vertex v has a neighborhood N(v) of radius O(log n) in which the induced subgraph is a tree plus at most O(log n) additional edges, and where for each simple path in N(v) the sum of the vertex degrees along the path is O(log n). Moreover, our results also apply in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for G(n, d/n) it applies for all external fields and β < β_d, where d·tanh(β_d) = 1 is the critical point for decay of correlations for the Ising model on G(n, d/n). © 2009 Wiley Periodicals, Inc. Random Struct. Alg., 2009
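A minimal sketch of the dynamics being analyzed: sample G(n, d/n), then repeatedly pick a uniform random vertex and resample its spin from the Ising conditional distribution given its neighbors. The Python code below assumes zero external field and uses illustrative parameter names; it is the standard heat-bath (Glauber) update, not the paper's block-dynamics analysis.

    import math
    import random

    def glauber_ising(n, d, beta, sweeps, seed=0):
        rng = random.Random(seed)
        # sample an instance of G(n, d/n)
        p = d / n
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        for _ in range(sweeps * n):            # one sweep = n single-site updates
            v = rng.randrange(n)
            s = sum(sigma[u] for u in adj[v])  # local field: sum of neighboring spins
            p_plus = math.exp(beta * s) / (math.exp(beta * s) + math.exp(-beta * s))
            sigma[v] = 1 if rng.random() < p_plus else -1
        return sigma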

7.
This article considers informative labeling schemes for graphs. Specifically, the question introduced is whether it is possible to label the vertices of a graph with short labels in such a way that the distance between any two vertices can be inferred from inspecting their labels. A labeling scheme enjoying this property is termed a proximity-preserving labeling scheme. It is shown that, for the class of n-vertex weighted trees with M-bit edge weights, there exists such a proximity-preserving labeling scheme using O(M log n + log² n) bit labels. For the family of all n-vertex unweighted graphs, a labeling scheme is proposed that, using O(log² n · κ · n^{1/κ}) bit labels, can provide approximate estimates of the distance that are accurate up to a factor of [formula]. In particular, using O(log³ n) bit labels, the scheme can provide estimates accurate up to a factor of √(2 log n). (For weighted graphs, one of the log n factors in the label size is replaced by a factor logarithmic in the network's diameter.) © 2000 John Wiley & Sons, Inc. J Graph Theory 33: 167–176, 2000
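For the tree case, one standard way to get labels with O(log n) entries is a centroid (separator) decomposition: every vertex stores its distance to each centroid above it in the decomposition, and the distance between two vertices is the minimum of the summed distances over the centroids appearing in both labels. The Python sketch below (unweighted trees, illustrative only and not necessarily the scheme of the paper) follows that idea.

    from collections import deque

    def tree_distance_labels(adj):
        # adj: adjacency list of an unweighted tree on vertices 0..n-1.
        # labels[v] maps each centroid above v in the decomposition to dist(v, centroid).
        n = len(adj)
        labels = [dict() for _ in range(n)]
        removed = [False] * n

        def bfs(src):
            # BFS distances from src within the current (not-yet-removed) component
            dist, queue = {src: 0}, deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if not removed[v] and v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def centroid(comp):
            # vertex whose removal leaves pieces of size <= len(comp) // 2
            comp_set, size = set(comp), len(comp)
            root = comp[0]
            parent, order, stack = {root: None}, [], [root]
            while stack:
                u = stack.pop()
                order.append(u)
                for v in adj[u]:
                    if v in comp_set and v not in parent:
                        parent[v] = u
                        stack.append(v)
            sub = {u: 1 for u in comp}
            for u in reversed(order):
                if parent[u] is not None:
                    sub[parent[u]] += sub[u]
            for u in comp:
                biggest = size - sub[u]
                for v in adj[u]:
                    if v in comp_set and parent.get(v) == u:
                        biggest = max(biggest, sub[v])
                if biggest <= size // 2:
                    return u

        stack = [0]
        while stack:
            dist = bfs(stack.pop())
            comp = list(dist)
            c = centroid(comp)
            cdist = bfs(c)
            for u in comp:
                labels[u][c] = cdist[u]
            removed[c] = True
            stack.extend(v for v in adj[c] if not removed[v])
        return labels

    def labeled_distance(label_u, label_v):
        # exact tree distance recovered from the two labels alone
        return min(label_u[c] + label_v[c] for c in label_u.keys() & label_v.keys())

Because each component at least halves at every level, every label has O(log n) entries, each an (identifier, distance) pair.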

8.
In this paper, we face the problem of computing an enclosing pair of axis-parallel rectangles of a set of polygonal objects in the plane, serving as a simple container. We propose an O(nα(n) log n) worst-case time algorithm, where α(·) is the inverse Ackermann function, for finding, given a set M of points, segments and polygons defined by n vertices, a pair of axis-parallel rectangles (s, t) such that s ∪ t encloses all objects in M and area(s) + area(t) is minimum. The algorithm works in O(nα(n) log log n) worst-case space. Moreover, we prove an Ω(n log n) lower bound for the one-dimensional version of the problem. We also show that for the special case of enclosing a set of polygons with axis-parallel sides, our algorithm runs in optimal worst-case time O(n log n), using worst-case space O(n log log n).

9.
In this paper, we develop two algorithms for finding a directed path of minimum rank-two monotonic cost between two specified nodes in a network with n nodes and m arcs. Under the condition that one of the vectors characterizing the cost function f is binary, one yields an optimal solution in O(n³) or O(nm log n) time if f is quasiconcave; the other solves any problem in O(nm + n² log n) time.

10.
In physical VLSI design, network design (wiring) is the most time-consuming phase. For solving global wiring problems, we propose to first compute from the layout geometry a graph that preserves all shortest paths between pairs of relevant points, and then to operate on that graph for computing shortest paths, Steiner minimal tree approximations, or the like. For a set of points and a set of simple orthogonal polygons as obstacles in the plane, with n input points (polygon corners or other) altogether, we show how a shortest-paths-preserving graph of size O(n log n) can be computed in time O(n log n) in the worst case, with space O(n). We illustrate the merits of this approach with a simple example: if the length of a longest edge in the graph is bounded by a polynomial in n, an assumption that is clearly fulfilled for graphs derived from VLSI layout geometries, then a shortest path can be computed in time O(n log n log log n) in the worst case; this result improves on the known best one of O(n (log n)^{3/2}).

11.
In this paper we present an efficient algorithm to test if two given paths are homotopic; that is, whether they wind around obstacles in the plane in the same way. For paths specified by n line segments with obstacles described by n points, several standard approaches achieve quadratic running time. For simple paths, our algorithm runs in O(n log n) time, which we show is tight. For self-intersecting paths the problem is related to Hopcroft's problem; our algorithm runs in O(n^{3/2} log n) time.

12.
The dynamic programming algorithm of [12] for the bandwidth minimization problem is improved. It is shown that, for all k > 1, BANDWIDTH(k) can be solved in O(n^k) steps and simultaneous O(n^k) space, where n is the number of vertices in the graph, and that each such problem is in NSPACE(log n). The same improved dynamic-programming approach works to show that the MINCUT LINEAR ARRANGEMENT problem restricted to the fixed value k, denoted by MINCUT(k), is solvable in O(n^k) steps and simultaneous O(n^k) space and is in the class NSPACE(log n).

13.
Let p(n) be the function that counts the number of partitions of n. Let b ≥ 2 be a fixed positive integer. In this paper, we show that for almost all n the sum of the digits of p(n) in base b is at least log n/(7 log log n). Our proof uses the first term of Rademacher's formula for p(n).
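Small cases of this statement are easy to check directly by computing p(n) exactly with Euler's pentagonal-number recurrence (rather than Rademacher's formula) and summing the digits in base b — a minimal Python sketch:

    def partitions_up_to(n):
        # p[0..n] via Euler's pentagonal-number recurrence
        p = [1] + [0] * n
        for m in range(1, n + 1):
            total, k = 0, 1
            while True:
                g1 = k * (3 * k - 1) // 2      # generalized pentagonal numbers
                g2 = k * (3 * k + 1) // 2
                if g1 > m:
                    break
                sign = 1 if k % 2 == 1 else -1
                total += sign * p[m - g1]
                if g2 <= m:
                    total += sign * p[m - g2]
                k += 1
            p[m] = total
        return p

    def digit_sum(x, base):
        s = 0
        while x:
            s += x % base
            x //= base
        return s

    # digit_sum(partitions_up_to(1000)[1000], 10) gives the base-10 digit sum of p(1000)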

14.
We present a parallel randomized algorithm, running on a CRCW PRAM, to determine whether two planar graphs are isomorphic, and if so to find the isomorphism. We assume that we have a tree of separators for each planar graph (which can be computed by known algorithms in O(log² n) time with n^{1+ε} processors, for any ε > 0). If n is the number of vertices, our algorithm takes O(log n) time with [formula] processors and has a probability of failure of at most 1/n. The algorithm needs 2 log m − log n + O(log n) random bits. The number of random bits can be decreased to O(log n) by increasing the number of processors to n^{3/2+ε}, for any ε > 0. Our parallel algorithm has significantly improved processor efficiency compared to the previous logarithmic-time parallel algorithm of Miller and Reif (SIAM J. Comput. 20 (1991), 1128–1147), which requires n⁴ randomized processors or n⁵ deterministic processors.

15.
We achieve an O(log n) amortized time bound per operation for the off-line version of the dynamic convex hull problem in the plane. In this problem, a sequence of n insert, delete, and query instructions is to be processed, where each insert instruction adds a new point to the set, each delete instruction removes an existing point, and each query requests a standard convex hull search. We process the entire sequence in total O(n log n) time and O(n) space. Alternatively, we can preprocess a sequence of n insertions and deletions in O(n log n) time and space, then answer queries in history in O(log n) time apiece (a query in history means a query comes with a time parameter t, and it must be answered with respect to the convex hull present at time t). The same bounds also hold for the off-line maintenance of several related structures, such as the maximal vectors, the intersection of half-planes, and the kernel of a polygon. Achieving an O(log n) per-operation time bound for the on-line versions of these problems is a longstanding open problem in computational geometry.

16.
Interval allocation has been suggested as a possible formalization for the PRAM of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given n nonnegative integers x_1, ..., x_n, to allocate n nonoverlapping subarrays of sizes x_1, ..., x_n from within a base array of O(Σ_{j=1}^{n} x_j) cells. We show that interval allocation problems of size n can be solved in O((log log n)³) time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of Θ(log n/log log n). We describe an application to the problem of computing the connected components of an undirected graph. Finally we show that there is a nonuniform deterministic algorithm that solves interval allocation problems of size n in O(log log n) time with optimal speedup.
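Sequentially the task is trivial: an exclusive prefix sum of the sizes gives the starting offset of each subarray inside a base array of exactly Σ_j x_j cells, and the whole difficulty in the abstract lies in doing (an approximate version of) this in doubly-logarithmic parallel time. A minimal Python sketch of that sequential baseline:

    def allocate_intervals(sizes):
        # offsets[i] = starting cell of the i-th subarray; total = length of the base array
        offsets, total = [], 0
        for x in sizes:
            offsets.append(total)     # exclusive prefix sum
            total += x
        return offsets, total

    # allocate_intervals([3, 0, 5, 2]) == ([0, 3, 3, 8], 10)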

17.
Singleton attractor (also called fixed point) detection is known to be NP-hard even for AND/OR Boolean networks (AND/OR BNs in short, i.e., BNs consisting of AND/OR nodes), where a BN is a mathematical model of genetic networks and singleton attractors correspond to steady states. In our recent paper, we developed an O(1.787^n) time algorithm for detecting a singleton attractor of a given AND/OR BN, where n is the number of nodes. In this paper, we present an O(1.757^n) time algorithm, with which we succeeded in improving the above algorithm. We also show that this problem can be solved in [formula] time, which is less than O((1 + ε)^n) for any positive constant ε, when a BN is planar. A preliminary version of this paper has appeared in Proc. 3rd International Conference on Algebraic Biology (AB2008) [27].
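To make the problem concrete, here is a brute-force singleton-attractor search in Python (2^n enumeration, nothing like the bounds above): a state is a fixed point iff every node's AND/OR over its, possibly negated, inputs reproduces the node's current value. The network encoding used here — per node an operator 'AND' or 'OR' and a list of (input index, negated) pairs — is an assumption for illustration.

    from itertools import product

    def is_fixed_point(state, network):
        # network[i] = (op, inputs), op in {'AND', 'OR'},
        # inputs = list of (j, negated); every node is assumed to have >= 1 input
        for i, (op, inputs) in enumerate(network):
            vals = [1 - state[j] if neg else state[j] for j, neg in inputs]
            new = min(vals) if op == 'AND' else max(vals)
            if new != state[i]:
                return False
        return True

    def find_singleton_attractor(network):
        n = len(network)
        for state in product((0, 1), repeat=n):
            if is_fixed_point(state, network):
                return state
        return None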

18.
We consider Las Vegas randomized dynamic algorithms for on-line connectivity problems with deletions only. In particular, we show that starting from a graph with m edges and n nodes, we can maintain a spanning forest during m deletions in O(m log(n²/m) + n (log n)³ (log log n)²) expected time, which is O(m) if m = Θ(n²) and O(m log n) if m = Ω(n (log n log log n)²). The deletions may be interspersed with connectivity queries, each of which is answered in constant time. The previous best bound was O(m log² n) by Henzinger and Thorup, which covered both insertions and deletions. The result is based on a general randomized reduction for edge connectivity problems of many deletions-only queries to a few deletions and insertions queries. For 2-edge connectivity, the complexity is improved from O(m (log n)⁵) to O(m log(n²/m) + n (log n)⁶ (log log n)²). For the general decremental k-edge-connectivity problem, we get a total running time of O(k²n² polylog n). Here the previous best bound was O(kmn polylog n). Improved running times are also achieved for the static consensus tree problem, with applications to computational biology and relational databases.

19.
We study the average-case complexity of shortest-paths problems in the vertex-potential model. The vertex-potential model is a family of probability distributions on complete directed graphs with arbitrary real edge lengths, but without negative cycles. We show that on a graph with n vertices and with respect to this model, the single-source shortest-paths problem can be solved in O(n²) expected time, and the all-pairs shortest-paths problem can be solved in O(n² log n) expected time. © 2000 John Wiley & Sons, Inc. Random Struct. Alg., 16, 33–46, 2000

20.
Three related rectangle intersection problems in k-dimensional space are considered: (1) find the intersections of a rectangle with a given set of rectangles, (2) find the intersecting pairs of rectangles as they are inserted into or deleted from an existing set of rectangles, and (3) find the intersecting pairs of a given set of rectangles. By transforming these problems into range search problems, one need not divide the intersection problem into two subproblems, namely the edge-intersecting problem and the containment problem, as done in many previous studies. Furthermore, this approach can also solve these subproblems separately, if required. For the first problem the running time is O((log n)^{2k−1} + s), where s is the number of intersecting pairs of rectangles. For the second problem the time needed to generate and maintain n rectangles is O(n (log n)^{2k}) and the time for each query is O((log n)^{2k−1} + s). For the third problem the total time is O(n log n + n (log n)^{2(k−1)} + s) for k ≥ 1.
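For intuition in the plane (k = 2): two axis-parallel rectangles intersect iff their x-intervals overlap and their y-intervals overlap, so a sweep over x that tests y-overlap against the currently open rectangles reports all intersecting pairs. A Python sketch of that baseline (worst case O(n² + s), not the range-search bounds above):

    def intersecting_pairs(rects):
        # rects[i] = (x1, x2, y1, y2) with x1 <= x2 and y1 <= y2 (closed boxes)
        events = []
        for i, (x1, x2, y1, y2) in enumerate(rects):
            events.append((x1, 0, i))      # 0 = rectangle opens
            events.append((x2, 1, i))      # 1 = rectangle closes (sorted after opens at equal x)
        events.sort()
        active, pairs = set(), []
        for _, kind, i in events:
            if kind == 0:
                y1, y2 = rects[i][2], rects[i][3]
                for j in active:           # y-overlap test against every open rectangle
                    if y1 <= rects[j][3] and rects[j][2] <= y2:
                        pairs.append((min(i, j), max(i, j)))
                active.add(i)
            else:
                active.discard(i)
        return pairs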
