Query returned 20 similar documents; search took 31 ms
1.
We give an algorithm for triangulating n-vertex polygonal regions (with holes) so that no angle in the final triangulation measures more than π/2. The number of triangles in the triangulation is only O(n), improving a previous bound of O(n^2), and the running time is O(n log^2 n). The basic technique used in the algorithm, recursive subdivision by disks, is new and may have wider application in mesh generation. We also report on an implementation of our algorithm.
The research of S. Mitchell was supported by the Applied Mathematical Sciences program, U.S. Department of Energy Research, and by the U.S. Department of Energy under Contract DE-AC04-76DP00789. J. Ruppert's work was performed while he was at the NASA Ames Research Center as an employee of Computer Sciences Corporation, under NASA Contract NAS 2-12961.
2.
We apply a tabu search method to a scheduling problem of a company producing cables for cars: the task is to determine on what machines and in which order the cable jobs should be produced in order to save production costs. First, the problem is modeled as a combinatorial optimization problem. We then employ a tabu search algorithm as an approach to solve the specific problem of the company, adapt various intensification as well as diversification strategies within the algorithm, and demonstrate how these different implementations improve the results. Moreover, we show how the computational cost in each iteration of the algorithm can be reduced drastically from O(n^3) (naive implementation) to O(n) (smart implementation) by exploiting the specific structure of the problem (n refers to the number of cable orders).
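The company-specific moves and cost model belong to the paper; as a generic illustration of the tabu-search loop described here (a tabu list with fixed tenure plus an aspiration criterion), a minimal Python sketch with hypothetical names might be:

```python
def tabu_search(init, neighbors, cost, iters=100, tenure=5):
    """Generic tabu-search skeleton: take the best non-tabu neighbor
    each iteration; a tabu move is allowed only when it beats the
    best cost seen so far (aspiration criterion)."""
    cur, best = init, init
    best_c = cost(best)
    tabu = {}  # move -> last iteration at which the move is still tabu
    for t in range(iters):
        cand, cand_move, cand_c = None, None, float("inf")
        for move, nb in neighbors(cur):
            c = cost(nb)
            if tabu.get(move, -1) >= t and c >= best_c:
                continue  # tabu and not aspirated: skip
            if c < cand_c:
                cand, cand_c, cand_move = nb, c, move
        if cand is None:
            break  # every move tabu: stop (a diversification hook belongs here)
        cur = cand
        tabu[cand_move] = t + tenure
        if cand_c < best_c:
            best, best_c = cand, cand_c
    return best, best_c

# Toy usage: recover a hidden bit pattern under single-bit-flip moves.
target = (1, 0, 1, 1, 0, 1, 0, 1)
flip = lambda s, i: s[:i] + (1 - s[i],) + s[i + 1:]
nbrs = lambda s: ((i, flip(s, i)) for i in range(len(s)))
ham = lambda s: sum(a != b for a, b in zip(s, target))
best, best_c = tabu_search((0,) * 8, nbrs, ham)
```

The paper's O(n) per-iteration cost comes from incremental evaluation of moves on the real problem structure, which this sketch does not reproduce.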
3.
The task of fitting smoothing spline surfaces to meteorological data such as temperature or rainfall observations is computationally intensive. The generalized cross validation (GCV) smoothing algorithm, if implemented using direct matrix techniques, is O(n^3) computationally, and memory requirements are O(n^2). Thus, for data sets larger than a few hundred observations, the algorithm is prohibitively slow. The core of the algorithm consists of solving a series of shifted linear systems, and iterative techniques have been used to lower the computational complexity and facilitate implementation on a variety of supercomputer architectures. For large data sets, though, the execution time is still quite high. In this paper we describe a Lanczos-based approach that avoids explicitly solving the linear systems and dramatically reduces the amount of time required to fit surfaces to sets of data.
4.
The peeling of a d-dimensional set of points is usually performed with successive calls to a convex hull algorithm; the optimal worst-case convex hull algorithm, known to have an O(n log n) execution time, may give an O(n^2 log n) time to peel the whole set; an O(n·m) convex hull algorithm, m being the number of extremal points, is shown to peel every set in O(n^2) time, and is proved to be optimal; an implementation of this algorithm is given for planar sets and spatial sets, but the latter gives only an approximate O(n^2) performance.
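The "successive calls to a convex hull algorithm" baseline is easy to state concretely. Below is a minimal planar sketch of that naive peeling using Andrew's monotone chain hull, i.e. the O(n^2 log n)-class approach the paper improves on, not the optimal algorithm itself:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order
    (collinear boundary points are dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last two kept points and p make a non-left turn
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0):
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def peel(points):
    """Convex layers by repeated hull calls (the naive approach)."""
    remaining = sorted(set(points))
    layers = []
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        hull_set = set(hull)
        remaining = [p for p in remaining if p not in hull_set]
    return layers
```

On a 3×3 grid this yields three layers: the four corners, the edge-midpoint "diamond", and the center point.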
5.
The time complexity for testing whether an n-by-n real matrix is a P-matrix is reduced from O(2^n n^3) to O(2^n) by applying recursively a criterion for P-matrices based on Schur complementation. A Matlab program implementing the associated algorithm is provided.
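The Schur-complement criterion rests on the identity det A[{1} ∪ β] = a11 · det((A/a11)[β]): A is a P-matrix (all principal minors positive) iff a11 > 0 and both the trailing principal submatrix A22 and the Schur complement A/a11 are P-matrices, giving the 2^n recursion. The paper supplies a Matlab program; this pure-Python port of the same recursion is illustrative only, with a small tolerance standing in for exact sign tests:

```python
def is_p_matrix(A, tol=1e-12):
    """Recursive P-matrix test via Schur complementation, O(2^n) worst case."""
    n = len(A)
    if n == 1:
        return A[0][0] > tol
    a11 = A[0][0]
    if a11 <= tol:
        return False  # the 1x1 principal minor a11 must be positive
    # Trailing principal submatrix A22 and Schur complement A/a11.
    A22 = [row[1:] for row in A[1:]]
    schur = [[A22[i][j] - A[i + 1][0] * A[0][j + 1] / a11
              for j in range(n - 1)]
             for i in range(n - 1)]
    return is_p_matrix(A22, tol) and is_p_matrix(schur, tol)
```

For example, [[2, -1], [-1, 2]] is a P-matrix, while [[1, 2], [3, 1]] is not (its determinant is -5).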
6.
Existing implementations of Munkres' algorithm for the optimal assignment problem are shown to require O(n^4) time in the worst n×n case. A new implementation is presented which runs in worst-case time O(n^3) and compares favorably in performance with the algorithm of Edmonds and Karp for this problem. The results of this paper were obtained by the author while at the Department of Computer Science, Cornell University. This work was supported in part by a Vanderbilt University Research Council Grant.
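The O(n^3) bound is nowadays usually reached by shortest-augmenting-path implementations with dual potentials, in the lineage of Munkres' method; a compact sketch of that standard formulation (not necessarily the paper's specific implementation) for a dense square cost matrix:

```python
def hungarian(cost):
    """O(n^3) assignment: shortest augmenting paths with potentials.
    Returns (assignment, total) where assignment[i] is the column
    matched to row i."""
    n = len(cost)
    INF = float("inf")
    # 1-indexed potentials u, v; p[j] = row matched to column j; column 0 is a sentinel.
    u = [0] * (n + 1); v = [0] * (n + 1)
    p = [0] * (n + 1); way = [0] * (n + 1)
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:  # grow shortest-path tree until a free column is found
            used[j0] = True
            i0, delta, j1 = p[j0], INF, -1
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):  # update potentials
                if used[j]:
                    u[p[j]] += delta; v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:
                break
        while j0:  # augment along the found path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    assignment = [0] * n
    for j in range(1, n + 1):
        if p[j]:
            assignment[p[j] - 1] = j - 1
    return assignment, sum(cost[i][assignment[i]] for i in range(n))
```

Each of the n augmentation phases costs O(n^2), giving the O(n^3) total.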
7.
In this paper, we study a direct parallel-in-time (PinT) algorithm for first- and second-order time-dependent differential equations. We use a second-order boundary value method as the time integrator. Instead of solving the corresponding all-at-once system iteratively, we diagonalize the time discretization matrix B, which yields a direct parallel implementation across all time steps. A crucial issue of this methodology is how the condition number (denoted by Cond_2(V)) of the eigenvector matrix V of B behaves as n grows, where n is the number of time steps. A large condition number leads to large roundoff error in the diagonalization procedure, which could seriously pollute the numerical accuracy. Based on a novel connection between the characteristic equation and the Chebyshev polynomials, we present explicit formulas for V and V^{-1}, by which we prove that Cond_2(V) = O(n^2). This implies that the diagonalization process is well-conditioned and the roundoff error only increases moderately as n grows; thus, compared to other direct PinT algorithms, a much larger n can be used to yield satisfactory parallelism. A fast structure-exploiting algorithm is also designed for computing the spectral diagonalization of B. Numerical results on a parallel machine are given to support our findings, where over 60 times speedup is achieved with 256 cores.
8.
In this paper we present an algorithm for enumerating without repetitions all the non-crossing generically minimally rigid bar-and-joint frameworks under edge constraints, which we call constrained non-crossing Laman frameworks, on a given set of n points in the plane. Our algorithm is based on the reverse search paradigm of Avis and Fukuda. It generates each output graph in O(n^4) time and O(n) space, or, with a slightly different implementation, in O(n^3) time and O(n^2) space. In particular, we obtain that the set of all the constrained non-crossing Laman frameworks on a given point set is connected by flips which preserve the Laman property.
D. Avis’s research was supported by NSERC and FQRNT grants.
N. Katoh's, M. Ohsaki's and S.-i. Tanigawa's research was supported by a MEXT Grant-in-Aid for Scientific Research on Priority Areas of New Horizons in Computing.
I. Streinu's research was supported by NSF grant CCF-0430990 and NSF-DARPA CARGO CCR-0310661.
9.
In this paper we present an optimal algorithm to solve the all-pairs shortest path problem on permutation graphs with n vertices and m edges which runs in O(n^2) time. Using this algorithm, the average distance of a permutation graph can also be computed in O(n^2) time.
10.
A trajectory is a sequence of locations, each associated with a timestamp, describing the movement of a point. Trajectory data is becoming increasingly available and the size of recorded trajectories is getting larger. In this paper we study the problem of compressing planar trajectories such that the most common spatio-temporal queries can still be answered approximately after the compression has taken place. In the process, we develop an implementation of the Douglas–Peucker path-simplification algorithm which works efficiently even in the case where the polygonal path given as input is allowed to self-intersect. For a polygonal path of size n, the processing time is O(n log^k n) for k=2 or k=3 depending on the type of simplification.
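The textbook Douglas–Peucker recursion (without the paper's efficiency improvements or self-intersection handling) is short enough to state; a sketch for planar polylines, with `eps` a hypothetical tolerance parameter:

```python
import math

def perp_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    """Classic recursion: keep the endpoints, find the vertex farthest
    from the chord, and recurse on both halves if it exceeds eps.
    Worst case O(n^2); the paper's contribution is a faster variant."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [a, b]  # everything within tolerance of the chord
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop the duplicated split vertex
```

Nearly collinear runs collapse to their endpoints, while any vertex farther than `eps` from the chord is retained.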
11.
In this paper we present an algorithm for finding an optimum assignment for an n×n matrix M in n iterations. The method uses systematic permutations on the rows of M and is based on the properties of optimum assignments. The implementation presented in the paper requires at most O(n^3) time and n^2 + 6n memory locations for solving a dense n×n problem. This work was supported by the National Science Foundation Grant NSF ENG 74-19788.
12.
In this paper, a sequential algorithm is presented to find all cut-vertices on trapezoid graphs. To every trapezoid graph G there is a corresponding trapezoid representation. If all the 4n corner points of the n trapezoids, in a trapezoid representation of a trapezoid graph G with n vertices, are given, then the proposed sequential algorithm runs in O(n) time. A parallel implementation of this algorithm can be done in O(log n) time using O(n/log n) processors on an EREW PRAM model.
13.
This paper presents a polynomial-time dual simplex algorithm for the generalized circulation problem. An efficient implementation of this algorithm is given that has a worst-case running time of O(m^2 (m + n log n) log B), where n is the number of nodes, m is the number of arcs and B is the largest integer used to represent the rational gain factors and integral capacities in the network. This running time is as fast as the running time of any combinatorial algorithm that has been proposed thus far for solving the generalized circulation problem.
Received: June 1998 / Accepted: June 27, 2001 / Published online: September 17, 2001
14.
Split trees are suitable data structures for storing records with different access frequencies. Under the assumption that the access frequencies are all distinct, Huang has proposed an O(n^4 log m) time algorithm to construct an (m+1)-way split tree for a set of n keys. In this paper, we generalize Huang's algorithm to deal with the case of non-distinct access frequencies. The technique used in the generalized algorithm is a generalization of Hester et al.'s, where the binary case was considered. The generalized algorithm runs in O(n^5 log m) time.
15.
In this paper we present an efficient algorithm to test if two given paths are homotopic; that is, whether they wind around obstacles in the plane in the same way. For paths specified by n line segments with obstacles described by n points, several standard ways achieve quadratic running time. For simple paths, our algorithm runs in O(n log n) time, which we show is tight. For self-intersecting paths the problem is related to Hopcroft's problem; our algorithm runs in O(n^{3/2} log n) time.
16.
The currently (asymptotically) fastest algorithm for minimum dominating set on graphs of n nodes is the trivial Ω(2^n) algorithm which enumerates and checks all the subsets of nodes. In this paper we present a simple algorithm which solves this problem in O(1.81^n) time.
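The trivial enumeration baseline mentioned here can be written directly; checking subsets in increasing size makes the first hit a minimum dominating set (adjacency lists assumed; purely illustrative):

```python
from itertools import combinations

def min_dominating_set(adj):
    """Trivial exponential baseline: enumerate vertex subsets in
    increasing size and return the first one whose closed
    neighborhoods cover every vertex."""
    n = len(adj)
    closed = [{v, *adj[v]} for v in range(n)]  # closed neighborhoods N[v]
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if len(set().union(*(closed[v] for v in S))) == n:
                return set(S)
    return set()
```

On the path 0-1-2 the middle vertex alone dominates; on the path 0-1-2-3 two vertices are needed.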
17.
Recently, É. Tardos gave a strongly polynomial algorithm for the minimum-cost circulation problem and solved the open problem posed in 1972 by J. Edmonds and R.M. Karp. Her algorithm runs in O(m^2 T(m, n) log m) time, where m is the number of arcs, n is the number of vertices, and T(m, n) is the time required for solving a maximum flow problem in a network with m arcs and n vertices. In the present paper, taking an approach that is a dual of Tardos's, we also give a strongly polynomial algorithm for the minimum-cost circulation problem. Our algorithm runs in O(m^2 S(m, n) log m) time and reduces the computational complexity, where S(m, n) is the time required for solving a shortest path problem with a fixed origin in a network with m arcs, n vertices, and a nonnegative arc length function. The complexity is the same as that of Orlin's algorithm, recently developed by efficiently implementing the Edmonds-Karp scaling algorithm.
18.
An Algol 60 program for determining the automorphism partitioning of an undirected unlabelled graph is presented. For graphs that do not contain strongly regular subgraphs, the time is at worst O(n^4), where n is the number of vertices in the graph. The algorithm is based on the Corneil-Gotlieb conjecture.
19.
Developing a polynomial time primal network simplex algorithm for the minimum cost flow problem has been a long-standing open problem. In this paper, we develop one such algorithm that runs in O(min(n^2 m log nC, n^2 m^2 log n)) time, where n is the number of nodes in the network, m is the number of arcs, and C denotes the maximum absolute arc cost if arc costs are integer and ∞ otherwise. We first introduce a pseudopolynomial variant of the network simplex algorithm called the "premultiplier algorithm". We then develop a cost-scaling version of the premultiplier algorithm that solves the minimum cost flow problem in O(min(nm log nC, nm^2 log n)) pivots. With certain simple data structures, the average time per pivot can be shown to be O(n). We also show that the diameter of the network polytope is O(nm log n).
20.
In this paper we compare the efficiency of several simplicial variable dimension algorithms. To do so, we first treat the issues of degeneracy and accelerating. We present a device for resolving degeneracy. Furthermore, we compare several accelerating techniques. The technique of iterated quasi-Newton steps after each major cycle of the simplicial algorithm is implemented in a computer code, which is used to compare the efficiency of the (n+1)-ray, 2n-ray, 2^n-ray and (3^n−1)-ray algorithms. Except for the (n+1)-ray algorithm, the number of function evaluations does not differ very much between the various algorithms. It appeared, however, that the 2^n-ray algorithm needs considerably less computation time.