Similar Documents
20 similar documents found.
1.
Two-dimensional arrays can be compared by a generalization of dynamic programming algorithms for string comparison. Earlier algorithms have computational complexity O(N^6) for comparison of two N × N arrays. The computational complexity is reduced to O(N^4) in general, and O(N^2) algorithms are pointed out for the range-limited case. An example is given to illustrate the lack of knowledge of mathematical properties of these algorithms. The problem of finding an algorithm to compute the minimum number of insertions, deletions, and substitutions needed to transform one array into another remains open.
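For orientation, the one-dimensional dynamic program being generalized here is the classical string edit distance; a minimal Python sketch of that base case (not the authors' two-dimensional algorithm) is:

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (O(len(a) * len(b)) time)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```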

2.
We present an inversion algorithm for the solution of a generic N × N Toeplitz system of linear equations with computational complexity O(N log^2 N) and storage requirements O(N). The algorithm relies upon the known structure of Toeplitz matrices and their inverses and achieves speed through a doubling method. All the results are derived and stated in terms of the recent concept of displacement rank, and this is used to extend the scope of the algorithm to include a wider class of matrices than just Toeplitz and also to include block Toeplitz matrices.
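For comparison, a Toeplitz system can already be solved in O(N^2) time by Levinson-type recursion, and SciPy exposes such a solver; the snippet below only illustrates the problem setup as a reference point, not the O(N log^2 N) doubling algorithm of the abstract.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# A Toeplitz matrix is defined by its first column c and first row r.
c = np.array([4.0, 1.0, 0.5, 0.25])
r = np.array([4.0, 2.0, 1.0, 0.5])
b = np.ones(4)

x = solve_toeplitz((c, r), b)              # Levinson-type O(N^2) solve
print(np.allclose(toeplitz(c, r) @ x, b))  # True
```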

3.
Let N be a regular chain-group on E (see W. T. Tutte, Canad. J. Math. 8 (1956), 13–28); for instance, N may be the group of integer flows or tensions of a directed graph with edge-set E. It is known that the number of proper Z_λ-chains of N (λ ∈ Z, λ ≥ 2) is given by a polynomial in λ, P(N, λ) (when N is the chain-group of integer tensions of the connected graph G, λP(N, λ) is the usual chromatic polynomial of G). We prove the formula P(N, λ) = Σ_{[E′] ∈ O(N)^+/~} Q(R_{[E′]}(N), λ), where O(N)^+ is the set of orientations of N with a proper positive chain, ~ is a simple equivalence relation on O(N)^+ (sequence of reversals of positive primitive chains), and Q(R_{[E′]}(N), λ) is the number of chains with values in [1, λ − 1] in any reorientation of N associated to an element of [E′]. Moreover, each term Q(R_{[E′]}(N), λ) is a polynomial in λ. As applications we obtain: P(N, 0) = (−1)^{r(N)} |O(N)^+/~|; P(N, −1) = (−1)^{r(N)} |O(N)^+| (a result first proved by Brylawski and Lucas); P(N, λ + 1) ≥ P(N, λ) for λ ≥ 2, λ ∈ Z. Our result can also be considered as a refinement of the following known fact: a regular chain-group N has a proper Z_λ-chain iff it has a proper chain in [−λ + 1, λ − 1].

4.
This paper presents a new directional multilevel algorithm for solving N-body or N-point problems with highly oscillatory kernels. We address the problem by first proving that the interaction between a ball of radius r and a well-separated region has an approximate low-rank representation, as long as the well-separated region belongs to a cone with a spanning angle of O(1/r) and is at a distance which is at least O(r^2) away from the ball. Based on this representation, our algorithm organizes the high-frequency computation using a multidirectional and multiscale strategy. Our algorithm is proved to have an optimal O(N log N) computational complexity for any given accuracy when the points are sampled from a two-dimensional surface.

5.
Ramanujan numbers were introduced in [2] to implement the discrete Fourier transform (DFT) without using any multiplication operation. Ramanujan numbers are related to π and to integers which are powers of 2. If the transform size N is a Ramanujan number, then the computational complexity of the algorithms used for computing the DFT is O(N^2) addition and shift operations, with no multiplications. In these algorithms, the transform can be computed sequentially with a single adder in O(N^2) addition time. A parallel implementation of the algorithm can be executed in O(N) addition time with O(N) adders. Some of these Ramanujan numbers of order 2 are related to the Biblical and Babylonian values of π [1]. In this paper, we analytically obtain upper bounds on the degree of approximation in the computation of the DFT when N is a prime Ramanujan number.

6.
Two algorithms to compute shortest collision-free paths in the Euclidean plane are presented. The f obstacles are assumed to be described by disjoint convex polygons having N vertices in total. After preprocessing time O(N + f log N), a suboptimal shortest path between two arbitrary query points can be found in O(f + N log N) time using Dijkstra's algorithm and in Θ(N) time using the A1 algorithm. The space complexity is O(N + f).
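As a point of reference, the query stage above runs Dijkstra's algorithm on a precomputed graph; a generic heap-based Dijkstra sketch follows (the graph construction over the polygonal obstacles is the paper's contribution and is not reproduced here).

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source in a graph given as an adjacency
    dict {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.5)]}, "a"))
# {'a': 0.0, 'b': 1.0, 'c': 2.5}
```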

7.
In this paper, we introduce a coupled approach of the local discontinuous Galerkin and standard finite element methods for solving singularly perturbed convection-diffusion problems. On a Shishkin mesh with linear elements, a rate of O(N^-1 ln N) in an associated norm is established, where N is the number of elements. Numerical experiments complement the theoretical results. Moreover, rates of O(N^-2 ln^2 N) in a discrete L^∞ norm and O(N^-2) in the L^2 norm are observed numerically on the Shishkin mesh.
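For readers unfamiliar with it, a Shishkin mesh is piecewise uniform with a layer-adapted transition point; the sketch below shows one common one-dimensional construction for a problem with a boundary layer at x = 1. The constants sigma and beta are problem-dependent assumptions, not values taken from the paper.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N (even) elements,
    refined near the layer at x = 1.  sigma, beta: assumed constants."""
    assert N % 2 == 0
    tau = min(0.5, sigma * eps / beta * np.log(N))    # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)  # outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)    # inside the layer
    return np.concatenate([coarse, fine[1:]])

x = shishkin_mesh(64, 1e-6)  # 65 mesh points for eps = 1e-6
```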

8.
Let a given collection of sets have size N measured by the sum of the cardinalities. Yellin and Jutla presented an algorithm which constructed the partial order induced by the subset relation (a “subset graph”) in a claimed O(N^2/log N) operations over a dictionary ADT, and exhibited a collection whose subset graph had Θ(N^2/log^2 N) edges. This paper first establishes a matching upper bound on the number of edges in a subset graph. It also presents a finer analysis of the algorithm, which confirms the claimed upper bound and shows it to be tight. A simple implementation requiring O(1) bit-parallel operations per ADT operation is presented, along with a variant of the algorithm with an implementation requiring O(N^2/log N) RAM operations.
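To fix terminology, one simple representation of the partial order in question lists all proper-containment pairs among the input sets; a brute-force construction (quadratic in the number of sets, so far from the bounds discussed above) is:

```python
def subset_graph(sets):
    """Edges (i, j) such that sets[i] is a proper subset of sets[j].
    Brute force over all ordered pairs; only meant to define the object."""
    return [(i, j)
            for i, a in enumerate(sets)
            for j, b in enumerate(sets)
            if i != j and a < b]   # '<' on Python sets is proper subset

print(subset_graph([{1}, {1, 2}, {1, 2, 3}, {4}]))
# [(0, 1), (0, 2), (1, 2)]
```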

9.
We describe a systolic algorithm for solving a Toeplitz least-squares problem of special form. Such problems arise, for example, when Volterra convolution equations of the first kind are solved by regularization. The systolic algorithm is based on a sequential algorithm of Eldén, but we show how the storage requirements of Eldén's algorithm can be reduced from O(n^2) to O(n). The sequential algorithm takes time O(n^2); the systolic algorithm takes time O(n) using a linear systolic array of O(n) cells. We also show how large problems may be decomposed and solved on a small systolic array.

10.
Journal of Complexity, 1996, 12(2): 81–115
Given a univariate polynomial f(z) of degree n with complex coefficients, whose norms are less than 2^m in magnitude, the root problem is to find all the roots of f(z) up to specified precision 2^-μ. Assuming the arithmetic model for computation, we provide an algorithm which has complexity O(n log^5 n log B), where B = χ + μ and χ = max{n, m}. This improves on the previous best known algorithm of Pan for the problem, which has complexity O(n^2 log^2 n log(m + μ)). A remarkable property of our algorithm is that it does not require any assumptions about the root separation of f, which were either explicitly, or implicitly, required by previous algorithms. Moreover it also has a work-efficient parallel implementation. We also show that both the sequential and parallel implementations of the algorithm work without modification in the Boolean model of arithmetic. In this case, it follows from root perturbation estimates that we need only specify θ = ⌈n(B + log n + 3)⌉ bits of the binary representations of the real and imaginary parts of each of the coefficients of f. We also show that by appropriate rounding of intermediate values, we can bound the number of bits required to represent all complex numbers occurring as intermediate quantities in the computation. The result is that we can restrict the numbers we use in every basic arithmetic operation to those having real and imaginary parts with at most φ bits, where [formula] and [formula]. Thus, in the Boolean model, the overall work complexity of the algorithm is only increased by a multiplicative factor of M(φ) (where M(ψ) = O(ψ (log ψ) log log ψ) is the bit complexity for multiplication of integers of length ψ). The key result on which the algorithm is based is a new theorem of Coppersmith and Neff relating the geometric distribution of the zeros of a polynomial to the distribution of the zeros of its high order derivatives. We also introduce several new techniques (splitting sets and “centered” points) which hinge on it. We also observe that our root finding algorithm can be efficiently parallelized to run in parallel time O(log^6 n log B) using n processors.

11.
An image consists of many discrete pixels with greyness of different levels, which can be quantified by greyness values. The greyness value at a pixel can also be represented by an integral, as the mean of a continuous greyness function over a small pixel region. Based on such an idea, discrete images can be produced by numerical integration; several efficient algorithms have been developed to convert images under transformations. Among these algorithms, the combination of splitting–shooting–integrating methods (CSIM) is most promising because no solutions of nonlinear equations are required for the inverse transformation. The CSIM was proposed in [6] to facilitate images and patterns under a cycle of transformations T^-1 T, where T is a nonlinear transformation. When a pixel region in two dimensions is split into N^2 subpixels, the convergence rate of pixel greyness by CSIM is proven in [8] to be only O(1/N). In [10], convergence rates of O_p(1/N^1.5) in probability, and O_p(1/N^2) in probability using a local partition, are established. The CSIM is well suited to binary images and images with a few greyness levels due to its simplicity. However, for images with many (e.g., 256) greyness levels, the CSIM still needs considerable CPU time since a rather large division number is needed. In this paper, a partition technique for numerical integration is proposed to evaluate carefully any overlaps between the transformed subpixel regions and the standard square pixel regions. This technique is employed to improve the CSIM so that the convergence rate O(1/N^2) of greyness solutions can be achieved. The new combinations are simple to carry out for image transformations because, likewise, no solutions of nonlinear equations are involved. The computational results for real 256×256 images with 256 greyness levels show that N = 4 is good enough for real applications. This clearly shows the validity and effectiveness of the new algorithms in this paper.

12.
Consider the resource allocation problem: minimize Σ_{i=1}^{n} f_i(x_i) subject to Σ_{i=1}^{n} x_i = N, with the x_i nonnegative integers, where each f_i is a convex function. The well-known algorithm based on the incremental method requires O(N log n + n) time to solve this problem. We propose here a new algorithm based on the Lagrange multiplier method, requiring O(n^2 (log N)^2) time. The latter is faster if N is much larger than n. Such a situation occurs, for example, when the optimal sample size problem related to monitoring urban air pollution is treated.
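For concreteness, the incremental method referred to above can be sketched as a greedy marginal-cost allocation driven by a heap; this is the textbook baseline, not the Lagrange-multiplier algorithm the paper proposes.

```python
import heapq

def allocate(fs, N):
    """Greedy incremental allocation: at each step give one unit to the
    activity with the smallest marginal cost f_i(x_i + 1) - f_i(x_i).
    Optimal when every f_i is convex; O(N log n) with a heap."""
    n = len(fs)
    x = [0] * n
    heap = [(fs[i](1) - fs[i](0), i) for i in range(n)]  # next marginal costs
    heapq.heapify(heap)
    for _ in range(N):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (fs[i](x[i] + 1) - fs[i](x[i]), i))
    return x

# toy example with two convex quadratic costs
print(allocate([lambda t: (t - 3) ** 2, lambda t: 2 * (t - 1) ** 2], 4))  # [3, 1]
```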

13.
The phase I maximum flow and most positive cut methods are used to solve the feasibility problem. Both of these methods take one maximum flow computation, so the feasibility problem can be solved using maximum flow algorithms. Let n and m be the number of nodes and arcs, respectively. In this paper, we present an algorithm to solve the feasibility problem with integer lower and upper bounds. The running time of our algorithm is O(mn log(nU)), where U is the value of the maximum upper bound. Our algorithm improves the O(m^2 log(nU))-time algorithm in [12]; hence it improves the running time in [12] by a factor of n. Sleator and Goldberg's algorithm is one of the well-known maximum flow algorithms and runs in O(mn log n) time, see [5]. Under the similarity assumption [11], our algorithm runs in O(mn log n) time, which matches the running time of Sleator and Goldberg's algorithm. The merit of our algorithm is that, when the given network is infeasible, it not only diagnoses infeasibility but also provides information that is useful to the modeler in estimating the maximum cost of adjusting the infeasible network.
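The classical reduction behind "one maximum flow computation" converts bound feasibility into a max-flow instance on an augmented network. A rough sketch of the circulation form using networkx follows; it is illustrative only and is not the O(mn log(nU)) algorithm of the paper.

```python
import networkx as nx

def is_feasible(arcs):
    """arcs: iterable of (u, v, lower, upper).  Decides whether a circulation
    respecting all lower/upper bounds exists, via one max-flow computation."""
    G = nx.DiGraph()
    excess = {}                        # net lower-bound inflow per node
    for u, v, lo, up in arcs:
        G.add_edge(u, v, capacity=up - lo)
        excess[v] = excess.get(v, 0) + lo
        excess[u] = excess.get(u, 0) - lo
    S, T = "_source", "_sink"
    G.add_nodes_from([S, T])
    need = 0
    for node, d in excess.items():
        if d > 0:
            G.add_edge(S, node, capacity=d)
            need += d
        elif d < 0:
            G.add_edge(node, T, capacity=-d)
    value, _ = nx.maximum_flow(G, S, T)
    return value == need               # feasible iff all source arcs saturate

print(is_feasible([("a", "b", 1, 3), ("b", "c", 0, 2), ("c", "a", 0, 2)]))  # True
print(is_feasible([("a", "b", 2, 3), ("b", "a", 0, 1)]))                    # False
```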

14.
It takes of the order of N^3 operations to solve a set of N linear equations in N unknowns or to invert the corresponding coefficient matrix. When the underlying physical problem has some time- or shift-invariance properties, the coefficient matrix is of Toeplitz (or difference or convolution) type, and it is known that it can be inverted with O(N^2) operations. However, non-Toeplitz matrices often arise even in problems with some underlying time-invariance, e.g., as inverses or products or sums of products of possibly rectangular Toeplitz matrices. These non-Toeplitz matrices should be invertible with a complexity between O(N^2) and O(N^3). In this paper we provide some content for this feeling by introducing the concept of displacement ranks, which serve as a measure of how ‘close’ to Toeplitz a given matrix is.

15.
Given an N × N array of 0s and 1s, the closest pair problem is to determine the minimum distance between any pair of ones. Let D be this minimum distance (or D = 2N if there are fewer than two 1s). Two solutions to this problem are given, one requiring O(log(N) + D) time and the other O(log(N)). These solutions are for two types of parallel computers arranged in a pyramid fashion with the base of the pyramid containing the matrix. The results improve upon an algorithm of Dyer that requires O(N) time on a more powerful computer.
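As a sequential point of comparison (not the pyramid-machine algorithms of the abstract, and assuming Euclidean distance), the quantity D can be computed by brute force over the positions of the 1s:

```python
from math import hypot

def closest_pair_of_ones(grid):
    """Minimum Euclidean distance between any two 1-entries of a 0/1 array;
    returns 2*N if there are fewer than two 1s.  Brute force, O(k^2) in the
    number k of ones."""
    N = len(grid)
    ones = [(i, j) for i, row in enumerate(grid) for j, v in enumerate(row) if v]
    if len(ones) < 2:
        return 2 * N
    return min(hypot(a[0] - b[0], a[1] - b[1])
               for idx, a in enumerate(ones) for b in ones[idx + 1:])

print(closest_pair_of_ones([[1, 0, 0],
                            [0, 0, 0],
                            [0, 0, 1]]))  # 2.828...
```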

16.
We propose and develop an efficient implementation of the robust tabu search heuristic for sparse quadratic assignment problems. The traditional implementation of the heuristic, applicable to all quadratic assignment problems, is of O(N^2) complexity per iteration for problems of size N. Using multiple priority queues to determine the next best move instead of scanning all possible moves, and using adjacency lists to minimize the operations needed to determine the cost of moves, we reduce the asymptotic (N → ∞) complexity per iteration to O(N log N). For practical-sized problems, the complexity is O(N).

17.
We study exact algorithms for the MAX-CUT problem. Introducing a new technique, we present an algorithmic scheme that computes a maximum cut in graphs with bounded maximum degree. Our algorithm runs in time O*(2^{(1 - 2/Δ)n}). We also describe a MAX-CUT algorithm for general graphs. Its time complexity is O*(2^{mn/(m+n)}). Both algorithms use polynomial space.
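To state the problem concretely, the trivial exact algorithm enumerates all 2^n bipartitions and is the baseline the schemes above improve upon; a tiny sketch of that baseline:

```python
from itertools import product

def max_cut_bruteforce(n, edges):
    """Exhaustive O*(2^n) MAX-CUT for tiny graphs: try every bipartition
    of the n vertices and count crossing edges (a baseline only)."""
    best = 0
    for side in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# 4-cycle: the maximum cut has all 4 edges crossing
print(max_cut_bruteforce(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4
```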

18.
Sequential clustering aims at determining homogeneous and/or well-separated clusters within a given set of entities, one at a time, until no more such clusters can be found. We consider a bi-criterion sequential clustering problem in which the radius of a cluster (or maximum dissimilarity between an entity chosen as center and any other entity of the cluster) is chosen as a homogeneity criterion and the split of a cluster (or minimum dissimilarity between an entity in the cluster and one outside of it) is chosen as a separation criterion. An O(N^3) algorithm is proposed for determining radii and splits of all efficient clusters, which leads to an O(N^4) algorithm for bi-criterion sequential clustering with radius and split as criteria. This algorithm is illustrated on the well-known Ruspini data set.
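Using the two definitions just given, the radius and split of a single cluster can be computed directly from a dissimilarity matrix. The naive sketch below assumes the center is the entity minimizing the maximum dissimilarity; the efficient-cluster enumeration of the paper is not reproduced.

```python
def radius_and_split(d, cluster):
    """d: symmetric dissimilarity matrix (list of lists); cluster: set of indices.
    radius = max dissimilarity from the best center to the rest of the cluster;
    split  = min dissimilarity between the cluster and its complement."""
    outside = [j for j in range(len(d)) if j not in cluster]
    radius = min(max(d[c][i] for i in cluster) for c in cluster)
    split = (min(d[i][j] for i in cluster for j in outside)
             if outside else float("inf"))
    return radius, split

d = [[0, 1, 5, 6],
     [1, 0, 5, 6],
     [5, 5, 0, 2],
     [6, 6, 2, 0]]
print(radius_and_split(d, {0, 1}))  # (1, 5)
```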

19.
A technique is shown for speeding up the parallel evaluation of functions. An algorithm is presented that evaluates powers x^n in time O(√(log n)) using O(n) processors and certain preprocessed data, while the known algorithms take [log_2 n] steps of multiplication or addition.
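The "known algorithms" the abstract compares against are the square-and-multiply family, which uses about log_2 n multiplications sequentially; a minimal sketch of that baseline (not the O(√(log n)) parallel scheme with preprocessing):

```python
def power(x, n):
    """Square-and-multiply: O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

print(power(3, 13))  # 1594323
```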

20.
We address the problem of computing homotopic shortest paths in the presence of obstacles in the plane. Problems on homotopy of paths have received attention very recently [Cabello et al., in: Proc. 18th Annu. ACM Sympos. Comput. Geom., 2002, pp. 160–169; Efrat et al., in: Proc. 10th Annu. European Sympos. Algorithms, 2002, pp. 411–423]. We present two output-sensitive algorithms, one for simple paths and one for non-simple paths. The algorithm for simple paths improves the previous algorithm [Efrat et al., in: Proc. 10th Annu. European Sympos. Algorithms, 2002, pp. 411–423]. The algorithm for non-simple paths achieves O(log^2 n) time per output vertex, which is an improvement by a factor of O(n/log^2 n) over the previous algorithm [Hershberger, Snoeyink, Comput. Geom. Theory Appl. 4 (1994) 63–98], where n is the number of obstacles. The running time has an overhead of O(n^{2+ε}) for any positive constant ε. In the case k < n^{2+ε}, where k is the total size of the input and output, we improve the running time to O((n + k + (nk)^{2/3}) log^{O(1)} n).
