Similar Articles
20 similar articles found.
1.
In 1987, Northby presented an efficient lattice-based search and optimization procedure to compute ground states of n-atom Lennard-Jones clusters and reported putative global minima for 13 ≤ n ≤ 150. In this paper, we introduce simple data structures which reduce the time complexity of the Northby algorithm for lattice search from O(n^{5/3}) per move to O(n^{2/3}) per move for an n-atom cluster with the full Lennard-Jones potential function. If the nearest-neighbor potential function is used, the time complexity can be further reduced to O(log n) per move for an n-atom cluster. The lattice local minimizers with the lowest potential function values are relaxed by a powerful truncated Newton algorithm. We are able to reproduce the minima reported by Northby. The improved algorithm is so efficient that less than 3 minutes of CPU time on the Cray X-MP is required for each cluster size in the above range. We then further improve the Northby algorithm by relaxing every lattice local minimizer found in the process. This certainly requires more time. However, lower-energy configurations were found with this improved algorithm for n = 65, 66, 75, 76, 77 and 134. These findings also show that in some cases, the relaxation of a lattice local minimizer with a worse potential function value may lead to a local minimizer with a better potential function value.
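For orientation, the sketch below evaluates the full Lennard-Jones potential of an n-atom cluster in reduced units (ε = σ = 1); it is only the objective function being minimized, not the lattice search or the truncated-Newton relaxation described above.

```python
import numpy as np

def lennard_jones_energy(coords):
    """Full Lennard-Jones potential in reduced units:
    E = 4 * sum_{i<j} (r_ij^-12 - r_ij^-6), coords being an (n, 3) array."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise displacement vectors
    r2 = np.sum(diff * diff, axis=-1)
    iu = np.triu_indices(len(coords), k=1)           # count each pair once
    inv_r6 = 1.0 / r2[iu] ** 3
    return float(np.sum(4.0 * (inv_r6 ** 2 - inv_r6)))
```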

2.
A parallel algorithm for depth-first searching of a directed acyclic graph (DAG) on a shared-memory model of a SIMD computer is proposed. The algorithm uses two parallel tree traversal algorithms, one for the preorder traversal and the other for the postorder traversal of an ordered tree. Each of these traversal algorithms has a time complexity of O(log n) when O(n) processors are used, n being the number of vertices in the tree. The parallel depth-first search algorithm for a directed acyclic graph G with n vertices has a time complexity of O((log n)^2) when O(n^{2.81}/log n) processors are used.

3.
This article proposes a new algorithm for cross-validation of the best linear unbiased predictor. The algorithm relies on a new technique for downdating the inverse of a Cholesky factor. Given n data points, the new algorithm has complexity O(n^3), compared to O(n^4), which is the order of the more traditional delete-one-and-recalculate method.
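For contrast with the O(n^3) downdating approach, here is a minimal sketch of the traditional delete-one-and-recalculate cross-validation for a simple kriging-style predictor; the Gaussian covariance kernel, zero-mean assumption, and nugget term are illustrative choices rather than anything specified in the article. Re-solving an (n-1) × (n-1) system in each of the n folds is what makes this baseline O(n^4).

```python
import numpy as np

def loo_cv_recompute(X, y, lengthscale=1.0, nugget=1e-8):
    """Leave-one-out CV by deleting one point and refitting from scratch.
    Each fold solves an (n-1) x (n-1) system, so the total cost is O(n^4);
    this is the baseline that Cholesky-downdating techniques improve on."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * d2 / lengthscale**2) + nugget * np.eye(n)   # assumed kernel
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        w = np.linalg.solve(K[np.ix_(keep, keep)], y[keep])       # refit without point i
        errors[i] = y[i] - K[i, keep] @ w                         # predict the held-out point
    return errors
```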

4.
In this paper, we consider the updating problems of reconstructing the biconnected components and the weighted shortest paths in response to topology changes of the network. We propose two distributed algorithms. The first algorithm solves the updating problem that reconstructs the biconnected components after several processors and links are added and deleted. Its bit complexity is O((n′ + a + d) log n′), its message complexity is O(n′ + a + d), its ideal time complexity is O(n′), and its space complexity is O(e log n + e′ log n′). The second algorithm solves the updating problem that reconstructs the weighted shortest paths. Its message complexity and ideal-time complexity are both O(u^2 + a + n′).

5.
An algorithm for finding a polygon with the minimum number of edges nested between two simple n-sided polygons is presented. The algorithm solves the problem in at most O(n log n) time, improving on the O(n^2) time complexity of two previous algorithms. The work was supported by NSERC grant OPG0041629.

6.
A general sorting algorithm, having the well-known O(n^2) algorithms Straight Insertion Sort and Selection Sort as special cases, is described. This algorithm is analyzed in the case that certain choices in the algorithm are made randomly, which yields an algorithm with an average complexity of O(n^{1.5}) and a worst-case complexity of O(n^2). However, making random choices (by some random number generator) is time consuming, and a simple quasi-random scheme is therefore implemented. Test runs indicate that this version also has an average complexity of O(n^{1.5}), and it even seems to perform better than the version using truly random choices (even disregarding the time used for the random choices). This version also needs very little administrative overhead, and it therefore compares favourably with many other sorting algorithms for small and medium-sized arrays.
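The general scheme itself is not given in the abstract; as a reminder, here is plain Straight Insertion Sort, one of the two O(n^2) special cases named above (a sketch for orientation only, not the randomized general algorithm).

```python
def straight_insertion_sort(a):
    """Straight Insertion Sort, one of the two O(n^2) special cases named
    above: grow a sorted prefix and sink each new element into place."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```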

7.
We study exact algorithms for the MAX-CUT problem. Introducing a new technique, we present an algorithmic scheme that computes a maximum cut in graphs with bounded maximum degree. Our algorithm runs in time O*(2^{(1 - 2/Δ)n}). We also describe a MAX-CUT algorithm for general graphs. Its time complexity is O*(2^{mn/(m+n)}). Both algorithms use polynomial space.
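For comparison, the sketch below is the trivial exact MAX-CUT algorithm that enumerates all 2^n vertex bipartitions, i.e. the O*(2^n) baseline that the bounded-degree and general-graph bounds above improve upon; it is not the paper's technique.

```python
from itertools import product

def max_cut_bruteforce(n, edges):
    """Trivial exact MAX-CUT over vertices 0..n-1 by enumerating all 2^n
    bipartitions; edges is a list of (u, v) pairs.  Exponential baseline only."""
    best_size, best_side = -1, None
    for side in product((0, 1), repeat=n):             # side[v] = part containing v
        size = sum(1 for u, v in edges if side[u] != side[v])
        if size > best_size:
            best_size, best_side = size, side
    return best_size, best_side
```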

8.
We draw on the observation that the amount of heat diffusing outside of a heated body in a short period of time is proportional to its surface area, to design a simple algorithm for approximating the surface area of a convex body given by a membership oracle. Our method has a complexity of O*(n^4), where n is the dimension, compared to O*(n^8) for the previous best algorithm. We show that our complexity cannot be improved given the current state-of-the-art in volume estimation. © 2013 Wiley Periodicals, Inc. Random Struct. Alg., 43, 407–428, 2013

9.
Enumeration of the spanning trees of an undirected graph is one of the graph problems that has received much attention in the literature. In this paper a new enumeration algorithm based on the idea of contractions of the graph is presented. The worst-case time complexity of the algorithm is O(n + m + nt), where n is the number of vertices, m the number of edges, and t the number of spanning trees in the graph. The worst-case space complexity of the algorithm is O(n^2). Computational analysis indicates that the algorithm requires less computation time than the previously best-known algorithms.
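The paper's O(n + m + nt) algorithm is not detailed in the abstract; the sketch below illustrates the underlying contraction idea with the textbook deletion/contraction recursion (spanning trees containing a chosen edge come from the contracted graph, the others from the graph with that edge deleted), which is correct but not the paper's optimized procedure.

```python
def spanning_trees(vertices, edges, chosen=()):
    """Enumerate spanning trees of a connected multigraph by deletion/contraction.
    vertices: a set; edges: list of (u, v, edge_id) triples; yields tuples of
    edge ids.  A textbook illustration of the contraction idea only."""
    if len(vertices) == 1:
        yield chosen
        return
    if not _connected(vertices, edges):
        return                                   # no spanning tree in this branch
    u, v, eid = next(e for e in edges if e[0] != e[1])
    rest = [e for e in edges if e[2] != eid]
    # Trees that avoid the chosen edge: delete it.
    yield from spanning_trees(vertices, rest, chosen)
    # Trees that use it: contract v into u, dropping self-loops.
    contracted = [(u if a == v else a, u if b == v else b, i) for a, b, i in rest]
    contracted = [e for e in contracted if e[0] != e[1]]
    yield from spanning_trees(vertices - {v}, contracted, chosen + (eid,))

def _connected(vertices, edges):
    adj = {x: set() for x in vertices}
    for a, b, _ in edges:
        adj[a].add(b); adj[b].add(a)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x); stack.extend(adj[x] - seen)
    return len(seen) == len(vertices)

# Example: the triangle on {1, 2, 3} has exactly 3 spanning trees.
# print(list(spanning_trees({1, 2, 3}, [(1, 2, 'a'), (2, 3, 'b'), (1, 3, 'c')])))
```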

10.
A global recursive bisection algorithm is described for computing the complex zeros of a polynomial. It has complexity O(n^3 p), where n is the degree of the polynomial and p the bit-precision requirement. If n processors are available, it can be realized in parallel with complexity O(n^2 p); it can also be implemented using exact arithmetic. A combined Wilf-Hansen algorithm is suggested for reducing the complexity.

11.
A new graph-theoretical algorithm to calculate Katz status scores reduces the computational complexity from O(n^3) to O(n + m) time. Randomly generated graphs as well as data from a large empirical study are used to compare the performance of two commercial network analysis packages (GRADAP and UCINET V) with that of the authors' algorithm, implemented in Visual Basic.
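The authors' O(n + m) algorithm is not described in the abstract; as a reference point, the sketch below computes Katz status scores from their defining series s = Σ_{k≥1} α^k (Aᵀ)^k · 1 by a fixed-point iteration in which each sweep costs O(n + m) on an adjacency list. The attenuation factor α is an assumed parameter and must be smaller than 1/λ_max(A) for the series to converge.

```python
import numpy as np

def katz_scores(adj, alpha=0.1, tol=1e-10, max_iter=1000):
    """Katz status scores via the fixed point s = alpha * A^T (1 + s).
    adj: adjacency lists, adj[u] = iterable of out-neighbours of u.
    Each sweep touches every edge once, i.e. O(n + m) work per iteration."""
    n = len(adj)
    s = np.zeros(n)
    for _ in range(max_iter):
        new = np.zeros(n)
        for u, nbrs in enumerate(adj):
            for v in nbrs:                       # edge u -> v confers status on v
                new[v] += alpha * (1.0 + s[u])
        if np.max(np.abs(new - s)) < tol:
            return new
        s = new
    return s
```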

12.
This paper is motivated by a method used for DNA sequencing by hybridization presented in [Jacek Blazewicz, Marta Kasprzak, Computational complexity of isothermic DNA sequencing by hybridization, Discrete Appl. Math. 154 (5) (2006) 718–729]. It presents a class of digraphs, the quasi-adjoint graphs, which includes the graphs used in the paper cited above. A polynomial recognition algorithm running in O(n^3) time, as well as a polynomial O(n^2 + m^2) algorithm for finding a Hamiltonian circuit in these graphs, are given. Furthermore, some results about related problems, such as finding an Eulerian circuit that respects some forbidden transitions (a path with three vertices), are discussed.

13.
Consider a set of n unit-time jobs, each one having a release date and a due date, both nonnegative integers, and a weight, a positive real number. Given a set of m parallel machines, we describe an algorithm for finding schedules with the minimum weighted number of tardy jobs. The complexity of the proposed algorithm is O(n^2 (1 + log m)/m). The best previous algorithm for this problem has complexity O(mn^3) and employs network-flow techniques. Our method is based on a characterization of schedules of this type and employs graph-theoretic tools.
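The characterization behind the O(n^2 (1 + log m)/m) algorithm is not given in the abstract; the sketch below instead solves small instances through a straightforward graph-theoretic formulation as a maximum-weight bipartite matching between jobs and unit time slots (m slot copies per integer time step). It yields the optimum but is far slower than the paper's method; the networkx usage and the small-horizon assumption are mine.

```python
import networkx as nx

def min_weighted_tardy(jobs, m):
    """jobs: list of (release, due, weight) unit-time jobs with integer dates;
    m identical parallel machines.  Returns the minimum total weight of tardy
    jobs via maximum-weight matching -- a baseline, not the paper's algorithm."""
    G = nx.Graph()
    for j, (r, d, w) in enumerate(jobs):
        for t in range(r, d):                    # job j is on time if it starts at t in [r, d-1]
            for i in range(m):                   # one slot node per machine and time step
                G.add_edge(("job", j), ("slot", t, i), weight=w)
    matched_weight = 0.0
    for u, v in nx.max_weight_matching(G):
        job_node = u if u[0] == "job" else v
        matched_weight += jobs[job_node[1]][2]   # matched jobs are scheduled on time
    return sum(w for _, _, w in jobs) - matched_weight
```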

14.
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n^3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In an earlier report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed where each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices, instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix will be computed by applying inverse iteration to this matrix. This will be achieved by using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decompositions of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.
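For scale, the sketch below runs the plain unshifted QR iteration on a dense symmetric matrix, where each step costs O(n^3); this is exactly the per-step cost that the tridiagonal and semiseparable reductions bring down to O(n). It illustrates the basic iteration only (no shifts, no deflation), not the semiseparable algorithm of the paper.

```python
import numpy as np

def qr_eigenvalues(A, iters=500):
    """Unshifted QR iteration on a dense real symmetric matrix: each step is
    a full O(n^3) QR factorization, illustration only."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q                 # similarity transform: eigenvalues preserved
    return np.sort(np.diag(A))    # iterates converge towards a diagonal matrix
```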

15.
Optimally Cutting a Surface into a Disk
We consider the problem of cutting a subset of the edges of a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length. We show that this problem is NP-hard in general, even for manifolds without boundary and for punctured spheres. We also describe an algorithm with running time n^{O(g+k)}, where n is the combinatorial complexity, g is the genus, and k is the number of boundary components of the input surface. Finally, we describe a greedy algorithm that outputs an O(log^2 g)-approximation of the minimum cut graph in O(g^2 n log n) time.

16.
We propose a parallel algorithm which reduces the problem of computing Hamiltonian cycles in tournaments to the problem of computing Hamiltonian paths. The running time of our algorithm is O(log n) using O(n^2/log n) processors on a CRCW PRAM, and O(log n log log n) on an EREW PRAM using O(n^2/(log n log log n)) processors. As a corollary, we obtain a new parallel algorithm for computing Hamiltonian cycles in tournaments. This algorithm can be implemented in time O(log n) using O(n^2/log n) processors in the CRCW model and in time O(log^2 n) with O(n^2/(log n log log n)) processors in the EREW model.
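The parallel reduction targets the Hamiltonian path problem in tournaments, which is easy sequentially: every tournament has a Hamiltonian path, and one can be built by inserting vertices one at a time with a binary search for an insertion point. The sketch below shows that sequential construction (an illustration of the target subproblem, not the parallel algorithm itself).

```python
def hamiltonian_path(n, beats):
    """Build a Hamiltonian path in a tournament on vertices 0..n-1.
    beats(u, v) is True iff the arc goes u -> v (exactly one direction holds)."""
    path = [0]
    for v in range(1, n):
        if beats(v, path[0]):            # v beats the head: prepend
            path.insert(0, v)
        elif beats(path[-1], v):         # tail beats v: append
            path.append(v)
        else:
            # Here path[0] beats v and v beats path[-1]; binary search for an
            # adjacent pair with path[lo] -> v and v -> path[hi].
            lo, hi = 0, len(path) - 1
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if beats(path[mid], v):
                    lo = mid
                else:
                    hi = mid
            path.insert(hi, v)
    return path
```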

17.
We prove that, for any constant ε > 0, the complexity of the vertical decomposition of a set of n triangles in three-dimensional space is O(n^{2+ε} + K), where K is the complexity of the arrangement of the triangles. For a single cell the complexity of the vertical decomposition is shown to be O(n^{2+ε}). These bounds are almost tight in the worst case. We also give a deterministic output-sensitive algorithm for computing the vertical decomposition that runs in O(n^2 log n + V log n) time, where V is the complexity of the decomposition. The algorithm is reasonably simple (in particular, it tries to perform as much of the computation in two-dimensional spaces as possible) and thus is a good candidate for efficient implementations. The algorithm is extended to compute the vertical decomposition of arrangements of n algebraic surface patches of constant maximum degree in three-dimensional space in time O(n λ_q(n) log n + V log n), where V is the combinatorial complexity of the vertical decomposition, λ_q(n) is a near-linear function related to Davenport-Schinzel sequences, and q is a constant that depends on the degree of the surface patches and their boundaries. We also present an algorithm with improved running time for the case of triangles which is, however, more complicated than the first algorithm. Mark de Berg was supported by the Dutch Organization for Scientific Research (N.W.O.) and by ESPRIT Basic Research Action No. 7141 (project ALCOM II: Algorithms and Complexity). Leonidas Guibas was supported by NSF Grant CCR-9215219, by a grant from the Stanford SIMA Consortium, by NSF/ARPA Grant IRI-9306544, and by grants from the Digital Equipment, Mitsubishi, and Toshiba Corporations. Dan Halperin was supported by a Rothschild Postdoctoral Fellowship, by a grant from the Stanford Integrated Manufacturing Association (SIMA), by NSF/ARPA Grant IRI-9306544, and by NSF Grant CCR-9215219. A preliminary version of this paper appeared in Proc. 10th ACM Symposium on Computational Geometry, 1994, pp. 1–10.

18.
The peeling of a d-dimensional set of points is usually performed with successive calls to a convex hull algorithm; the optimal worst-case convex hull algorithm, known to have an O(n log n) execution time, may take O(n^2 log n) to peel the whole set. An O(n·m) convex hull algorithm, m being the number of extremal points, is shown to peel every set in O(n^2) time and is proved to be optimal. An implementation of this algorithm is given for planar sets and spatial sets, but the latter gives only an approximate O(n^2) performance.
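A minimal sketch of the successive-hull peeling described above, using SciPy's Qhull wrapper as the convex hull subroutine; it assumes the points of every layer are in general position (Qhull needs at least d + 1 affinely independent points), and it is the straightforward method, not an optimized peeling algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull

def peel(points):
    """Convex-hull peeling by repeated calls to a convex-hull routine.
    Returns the layers as lists of indices into `points` (outermost first)."""
    pts = np.asarray(points, dtype=float)
    remaining = np.arange(len(pts))
    layers = []
    d = pts.shape[1]
    while len(remaining) > d:
        hull = ConvexHull(pts[remaining])
        layers.append(remaining[hull.vertices].tolist())   # current outer layer
        mask = np.ones(len(remaining), dtype=bool)
        mask[hull.vertices] = False
        remaining = remaining[mask]                        # peel it off
    if len(remaining):
        layers.append(remaining.tolist())                  # leftover innermost points
    return layers
```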

19.
We propose a very simple ray-shooting algorithm, whose only data structure is a triangulation. The query algorithm, after locating the triangle containing the origin of the ray, walks along the ray, advancing from one triangle to a neighboring one until the polygon boundary is reached. The key result of the paper is a Steiner triangulation of a simple polygon with the property that a ray can intersect at most O(log n) triangles before reaching the polygon boundary. We are able to compute such a triangulation in linear sequential time, or in O(log n) parallel time using O(n/log n) processors. This gives a simple, yet optimal, ray-shooting algorithm for a simple polygon. Using a well-known technique, we can extend our triangulation procedure to a multiconnected polygon with k components and n vertices, so that a ray intersects at most O(√k log n) triangles.
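The sketch below illustrates the "walk" query on a 2-D triangulation: locate the triangle containing the ray origin, then repeatedly cross into the neighbour over the edge the ray exits through until the boundary is reached. It uses SciPy's Delaunay triangulation of a point set rather than the paper's Steiner triangulation of a polygon, and it assumes the ray is in general position (never passing exactly through a vertex).

```python
import numpy as np
from scipy.spatial import Delaunay

def walk_ray(tri, origin, direction, eps=1e-12):
    """Return the indices of the triangles of `tri` (a scipy.spatial.Delaunay
    triangulation) crossed by the ray origin + t*direction, t >= 0, walking
    from one triangle to a neighbouring one until the boundary is reached."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    cur = int(tri.find_simplex(o[None, :])[0])
    if cur < 0:
        return []                                   # origin lies outside the triangulation
    visited, t_cur = [cur], 0.0
    while True:
        verts = tri.points[tri.simplices[cur]]      # the 3 corners of the current triangle
        best_t, best_opp = None, None
        for opp in range(3):                        # edge opposite local vertex `opp`
            p, q = verts[(opp + 1) % 3], verts[(opp + 2) % 3]
            M = np.column_stack((d, p - q))         # solve o + t*d = p + s*(q - p)
            if abs(np.linalg.det(M)) < eps:
                continue                            # ray parallel to this edge
            t, s = np.linalg.solve(M, p - o)
            if t > t_cur + eps and -eps <= s <= 1 + eps and (best_t is None or t < best_t):
                best_t, best_opp = t, opp
        if best_opp is None:
            return visited                          # degenerate input: stop walking
        nxt = int(tri.neighbors[cur, best_opp])
        if nxt == -1:
            return visited                          # exited through the outer boundary
        t_cur, cur = best_t, nxt
        visited.append(cur)
```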

20.
A simple parallel randomized algorithm to find a maximal independent set in a graph G = (V, E) on n vertices is presented. Its expected running time on a concurrent-read concurrent-write PRAM with O(|E| d_max) processors is O(log n), where d_max denotes the maximum degree. On an exclusive-read exclusive-write PRAM with O(|E|) processors the algorithm runs in O(log^2 n). Previously, an O(log^4 n) deterministic algorithm was given by Karp and Wigderson for the EREW-PRAM model. This was recently (independently of our work) improved to O(log^2 n) by M. Luby. In both cases randomized algorithms depending on pairwise independent choices were turned into deterministic algorithms. We comment on how randomized combinatorial algorithms whose analysis only depends on d-wise rather than fully independent random choices (for some constant d) can be converted into deterministic algorithms. We apply a technique due to A. Joffe (1974) and obtain deterministic constructions in fast parallel time of various combinatorial objects whose existence follows from probabilistic arguments.
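As a concrete (sequential) illustration of the kind of randomized scheme discussed above, the sketch below simulates a Luby-style maximal-independent-set computation: in each round every live vertex draws a random priority, the local minima join the independent set, and they and their neighbours are removed. This is a well-known variant in the spirit of the algorithms mentioned, not necessarily the exact algorithm of the paper.

```python
import random

def randomized_mis(adj, seed=0):
    """adj: dict mapping each vertex to the set of its neighbours.
    Returns a maximal independent set, built round by round."""
    rng = random.Random(seed)
    live, mis = set(adj), set()
    while live:
        prio = {v: rng.random() for v in live}
        winners = {v for v in live                       # local minima among live vertices
                   if all(prio[v] < prio[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live                     # neighbours can no longer join
        live -= removed
    return mis
```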
