Similar Documents (20 results)
1.
Clique-width and NLC-width are two closely related parameters that measure the complexity of graphs. Both clique-width and NLC-width are defined as the minimum number of labels required to create a labelled graph by certain terms of operations. Many hard graph problems become solvable in polynomial time if the inputs are restricted to graphs of bounded clique-width or NLC-width. Clique-width and NLC-width differ by at most a factor of two. The relative counterparts of these parameters are defined as the minimum number of labels necessary to create a graph while the tree structure of the term is fixed. We show that Relative Clique-width and Relative NLC-width differ significantly in computational complexity: while the former problem is NP-complete, the latter is solvable in polynomial time. The relative NLC-width can be computed in O(n^3) time, which also yields an exact algorithm for computing the NLC-width in O(3^n n) time. Additionally, our technique enables a combinatorial characterisation of NLC-width that avoids the usual operations on labelled graphs.

2.
We study makespan minimization on an m-machine flowshop in which no idle time is allowed between consecutive operations on each machine. We introduce an efficient O(n^2) greedy algorithm, which is shown numerically to perform better than a recently published heuristic.

3.
We study a problem of scheduling n jobs on a single machine in batches. A batch is a set of jobs processed contiguously and completed together when the processing of all jobs in the batch is finished. Processing of a batch requires a machine setup time dependent on the position of this batch in the batch sequence. Setup times and job processing times are continuously controllable, that is, they are real-valued variables within their lower and upper bounds. A deviation of a setup time or job processing time from its upper bound is called a compression. The problem is to find a job sequence, its partition into batches, and the values for setup times and job processing times such that (a) total job completion time is minimized, subject to an upper bound on total weighted setup time and job processing time compression, or (b) a linear combination of total job completion time, total setup time compression, and total job processing time compression is minimized. Properties of optimal solutions are established. If the lower and upper bounds on job processing times can be similarly ordered or the job sequence is fixed, then O(n^3 log n) and O(n^5) time algorithms are developed to solve cases (a) and (b), respectively. If all job processing times are fixed or all setup times are fixed, then more efficient algorithms can be devised to solve the problems.

4.
Algorithms for clustering n objects typically require O(n^2) operations. This report presents a special approach for a certain class of data that requires O(n) operations and O(n) storage. Such data commonly occur when a microscopic signal structure is imposed on a medium with potential for macroscopic defects, and the signal elements are then checked sequentially for errors. The algorithm can also be used to cluster other classes of data in O(n log n) operations. An application to videodisc defect consolidation is presented.
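The single-pass idea behind an O(n) clustering of sequentially scanned data can be sketched as follows. This is an illustrative reconstruction, not the paper's actual procedure; the function name `consolidate` and the `gap` threshold are assumptions:

```python
def consolidate(positions, gap):
    # positions arrive in scan order (sorted); defects closer than `gap`
    # are merged into the current cluster, so one pass suffices: O(n) time,
    # O(n) storage.
    clusters = []
    for p in positions:
        if clusters and p - clusters[-1][-1] <= gap:
            clusters[-1].append(p)   # extend the current cluster
        else:
            clusters.append([p])     # open a new cluster
    return clusters

print(consolidate([1, 2, 3, 40, 41, 90], gap=5))
# → [[1, 2, 3], [40, 41], [90]]
```

Because the data are already ordered by scan position, no pairwise distance matrix is ever needed, which is what removes the usual O(n^2) cost.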

5.
The paper deals with the single-machine scheduling problem in which job processing times as well as release dates are controllable parameters and may vary within given intervals. While all release dates have the same boundary values, the processing time intervals are arbitrary. It is assumed that the cost of compressing processing times and release dates from their initial values is a linear function of the compression amount. The objective is to minimize the makespan together with the total compression cost. We construct a reduction to the assignment problem for the case of equal release date compression costs, and develop an O(n^2) algorithm for the case of equal release date compression costs and equal processing time compression costs. For the bicriteria version of the latter problem with agreeable processing times, we suggest an O(n^2) algorithm that constructs the breakpoints of the efficient frontier.

6.
We describe a probabilistic algorithm for computing the gcd of two univariate integer polynomials of degree ≤ n whose l1-norms are bounded by 2^h, and estimate its expected running time by a worst-case bound of O(n(n + h)^(1+o(1))) bit operations.
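For contrast, the classical deterministic alternative is the Euclidean algorithm over the rationals, which a few lines of exact `Fraction` arithmetic can sketch. This is not the paper's probabilistic modular method (coefficient growth makes the naive version much slower); the names `poly_mod` and `poly_gcd` are illustrative, with coefficient lists given highest degree first:

```python
from fractions import Fraction

def trim(p):
    # drop leading zero coefficients
    while p and p[0] == 0:
        p = p[1:]
    return p

def poly_mod(a, b):
    # remainder of a divided by b (b non-zero); exact rational arithmetic
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]      # cancel the leading term of a
        a = trim(a)
    return a

def poly_gcd(a, b):
    a, b = trim(list(a)), trim(list(b))
    while b:
        a, b = b, poly_mod(a, b)
    lead = Fraction(a[0])
    return [Fraction(c) / lead for c in a]   # normalize to a monic gcd

# gcd(x^2 - 1, (x - 1)^2) = x - 1
print(poly_gcd([1, 0, -1], [1, -2, 1]))
# → [Fraction(1, 1), Fraction(-1, 1)]
```

The probabilistic algorithm of the abstract avoids the intermediate coefficient blow-up visible here by working modulo randomly chosen primes.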

7.
Single-Machine Scheduling with Release Times and Tails
We study the problem of scheduling jobs with release times and tails on a single machine with the objective of minimizing the makespan. This problem is strongly NP-hard; however, it is known to be polynomially solvable if all jobs have equal processing time P. We generalize this result and suggest an O(n^2 log n log P) algorithm for the case when the processing times of some jobs are restricted to either P or 2P.

8.
In this paper, we prove that the Jones polynomial of a link diagram obtained through repeated tangle replacement operations can be computed by a sequence of suitable variable substitutions in simpler polynomials. For the case that all the tangles involved in the construction of the link diagram have at most k crossings (where k is a constant independent of the total number n of crossings in the link diagram), we show that the computation time needed to calculate the Jones polynomial of the link diagram is bounded above by O(n^k). In particular, we show that the Jones polynomial of any Conway algebraic link diagram with n crossings can be computed in O(n^2) time. A consequence of this result is that the Jones polynomial of any Montesinos link, and of any two-bridge knot or link, with n crossings can be computed in O(n^2) time.

9.
A relation for a finite set of reals is a sequence of integer coefficients, not all zero, of an integral linear combination that represents zero. An iteration of the algorithm Alg(n, Z) for n real numbers is stated in four steps. It is proven that Alg(n, Z) requires fewer than 2n + 2n log(2n^2 M) iterations to construct an integral relation if one of norm M exists. Each iteration of Alg(n, Z) requires fewer than 5n^4 arithmetic operations. Alg(n, Z) comprises an algorithmic characterization of Z-linear independence.

10.
The problem of equivalence of interrupt handling programs in two algebraic models is studied. Program operators are divided into two classes, namely, the classes of basic operators and interrupt handling operators. The first model obeys the absorption law, i.e., each basic operator suppresses the result of application of any interrupt handling operator. This means that, if the interrupt handling is successfully finished and the control is passed to basic operators, then the result of interrupt handling has no influence on the program output. An algorithm that solves the problem of equivalence in the above model in O(n^2 log n) operations is suggested. The other model obeys the absorption law and satisfies the commutative property for basic operators. For this model, we also propose an algorithm that checks the equivalence of programs, in O(n^4 log n) operations.

11.
We give an abstract characterization of algebras of partial functions from A^n to A endowed with the operations of Menger superposition and set-theoretic difference of functions, regarded as subsets of A^(n+1).

12.
The LDL^T factorization of a symmetric indefinite matrix, although computationally efficient, may not exist and can be unstable in the presence of round-off error. The use of block-diagonal 2×2 pivots is attractive, but there are some difficulties in determining an efficient and stable pivot strategy. Previous suggestions have required O(n^3) operations (either multiplications or comparisons) just to implement the pivot strategy. A new strategy is described which in practice requires only O(n^2) operations. Indeed, the effort required by this pivot strategy is less than that required by partial pivoting in an unsymmetric LU factorization, which is the usual way of factorizing indefinite matrices.

13.
The problem of optimally scheduling n tasks in a parallel processor system is studied. The tasks are malleable, i.e., a task may be executed by several processors simultaneously, and the processing speed of a task is a nonlinear function of the number of processors allocated to it. The total number of processors is m, and it is an upper bound on the number of processors that can be used by all tasks simultaneously. It is assumed that the number of processors is sufficient to process all tasks simultaneously, i.e., n ≤ m. The objective is to find a task schedule and a processor allocation such that the overall task completion time, i.e., the makespan, is minimized. The problem is motivated by real-life applications of parallel computer systems in scientific computing of highly parallelizable tasks. An O(n) algorithm is presented to solve this problem when all the processing speed functions are convex. If these functions are all concave and the number of tasks is a constant, the problem can be solved in polynomial time. A relaxed problem, in which the number of processors allocated to each task is not required to be integer, can be solved in O(n max{m, n log_2 m}) time. It is proved that the minimum makespan values for the original and relaxed problems coincide. For n = 2 or n = 3, an optimal solution for the relaxed problem can be converted into an optimal solution for the original problem in constant time.

14.
《Journal of Complexity》2005,21(1):72-86
We present an inversion algorithm for nonsingular n×n matrices whose entries are degree-d polynomials over a field. The algorithm is deterministic and, when n is a power of two, requires Õ(n^3 d) field operations for a generic input; the soft-O notation Õ indicates some missing log(nd) factors. Up to such logarithmic factors, this asymptotic complexity is of the same order as the number of distinct field elements necessary to represent the inverse matrix.

15.
We consider the problem of scheduling n jobs on m parallel machines with inclusive processing set restrictions. Each job has a given release date, and all jobs have equal processing times. The objective is to minimize the makespan of the schedule. Li and Li (2015) developed an O(n^2 + mn log n) time algorithm for this problem. In this note, we present a modified algorithm with an improved time complexity of O(min{m, log n} · n log n).

16.
Mosheiov and Sidney (2003) showed that the makespan minimization problem with job-dependent learning effects can be formulated as an assignment problem and solved in O(n^3) time. We show that this problem can be solved in O(n log n) time by sequencing the jobs in shortest processing time (SPT) order, provided we use the observation that the job-dependent learning rates are correlated with the level of sophistication of the jobs and assume that these rates are bounded from below. The optimality of the SPT sequence is also preserved when the job-dependent learning rates are inversely correlated with the level of sophistication of the jobs and bounded from above.
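SPT sequencing under a learning effect can be sketched in a few lines. The position-based model below (a job j in position r takes p[j] · r^a[j] time, with a[j] ≤ 0 a job-dependent learning index) is a common formulation assumed here for illustration; the names `p`, `a`, `spt_order`, and `makespan` are not from the paper:

```python
def makespan(order, p, a):
    # single-machine makespan: sum of position-adjusted processing times,
    # positions numbered 1, 2, ... (hence r + 1 below)
    return sum(p[j] * (r + 1) ** a[j] for r, j in enumerate(order))

def spt_order(p):
    # shortest processing time first: an O(n log n) sort
    return sorted(range(len(p)), key=lambda j: p[j])

p = [5.0, 2.0, 8.0]
a = [-0.3, -0.2, -0.4]      # learning rates, here correlated with job length
spt = spt_order(p)
print(spt)                  # → [1, 0, 2]
print(makespan(spt, p, a))
```

The contrast in the abstract is exactly this: a sort replaces the O(n^3) assignment-problem solve once the correlation assumption licenses the SPT rule.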

17.
The quantum Fourier transform (QFT) is a powerful tool in quantum computing. The main ingredients of the QFT are the Walsh–Hadamard transform H and phase shifts P(·), both of which are 2×2 unitary matrices as operators on the two-dimensional 1-qubit space. In this paper, we show that H and P(·) suffice to generate the unitary group U(2) and, consequently, through controlled-U operations and their concatenations, the entire unitary group U(2^n) on n qubits can be generated. Since any quantum computing algorithm on an n-qubit quantum computer is based on operations by matrices in U(2^n), in this sense we have the universality of the QFT.
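A small numerical check (not the paper's proof) illustrates the starting point: products of H and P(φ) are again unitary, and already a product such as H P(φ) H leaves the diagonal subgroup. The helper names `matmul` and `dagger` are illustrative:

```python
import cmath
import math

def matmul(A, B):
    # 2×2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def P(phi):
    # 1-qubit phase shift
    return [[1, 0], [0, cmath.exp(1j * phi)]]

U = matmul(matmul(H, P(0.7)), H)    # H P(0.7) H: unitary and non-diagonal
UdU = matmul(dagger(U), U)          # should be (numerically) the identity
print(all(abs(UdU[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))
# → True
```

That H P(φ) H has non-zero off-diagonal entries is the first hint that the two gate families together escape any fixed abelian subgroup of U(2), which is what the generation result makes precise.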

18.
We propose a new Broyden method for solving systems of nonlinear equations which uses first derivatives but is more efficient than the Newton method (measured by computational time) for larger dense systems. The new method updates QR or LU decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires O(n^2) arithmetic operations per iteration, in contrast with the Newton method, which requires O(n^3) operations per iteration. Computational experiments confirm the high efficiency of the new method.
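The textbook "good Broyden" update that such methods build on can be sketched as follows. This is a minimal dense-matrix version, not the paper's QR/LU-updating variant: it re-solves B s = −F(x) from scratch each iteration (O(n^3) via the illustrative `solve` helper), which is precisely the cost the paper's factorization updates avoid:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (illustrative O(n^3) solve)
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def broyden(F, x0, B0, tol=1e-10, max_iter=50):
    # B approximates the Jacobian; each step applies the rank-one update
    # B += (y - B s) s^T / (s^T s), so no derivatives are re-evaluated.
    x, B = list(x0), [row[:] for row in B0]
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        s = solve(B, [-v for v in fx])          # quasi-Newton step
        x = [xi + si for xi, si in zip(x, s)]
        y = [u - v for u, v in zip(F(x), fx)]   # change in residual
        ss = sum(si * si for si in s)
        n = len(s)
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            for j in range(n):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
    return x

F = lambda x: [2 * x[0] + x[1] - 3, x[0] + 3 * x[1] - 4]
root = broyden(F, [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print([round(v, 8) for v in root])   # close to [1.0, 1.0]
```

Replacing the fresh solve by an update of a stored QR or LU factorization of B is what brings the per-iteration cost down to O(n^2), as the abstract states.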

19.
An algorithm for computing the cubic spline interpolation coefficients without solving the associated matrix equation is presented in this paper. It requires only O(n) multiplication or division operations for computing the inverse of the matrix, compared to O(n^2) or more operations in the Gauss elimination method.
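For context, the usual route to cubic spline coefficients is a tridiagonal linear system, and even solving that directly is O(n) via the standard Thomas algorithm (forward elimination plus back substitution). The sketch below shows that baseline solver, not the paper's matrix-free method; the name `thomas` and the diagonal conventions are illustrative:

```python
def thomas(a, b, c, d):
    # Solve a tridiagonal system in O(n): a = sub-diagonal, b = main
    # diagonal, c = super-diagonal, d = right-hand side (a[0] and c[-1]
    # are unused). No full matrix is ever formed.
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A spline-like system: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8], solution [1,2,3]
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0],
           [4.0, 8.0, 8.0])
print([round(v, 9) for v in x])   # approximately [1.0, 2.0, 3.0]
```

The O(n^2) figure in the abstract refers to dense Gauss elimination; exploiting the tridiagonal band, as here, already removes that cost.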

20.
In 1990, D. Coppersmith and S. Winograd published an estimate of the number of arithmetic operations necessary for the multiplication of square n × n matrices, namely O(n^2.3755). In this article, we systematize the theoretical instruments that were used by D. Coppersmith and S. Winograd for their estimate. The improved estimate O(n^2.373) is one of the results of this systematization.
