Similar Documents (20 results)
1.
Parallel computation offers a challenging opportunity to speed up the time-consuming enumerative procedures that are necessary to solve hard combinatorial problems. Theoretical analysis of such a parallel branch and bound algorithm is very hard, and empirical analysis is not straightforward, because the performance of a parallel algorithm cannot be evaluated simply by executing the algorithm on a few parallel systems. Among the difficulties encountered are the noise produced by other users on the system, the limited variation in parallelism (the number of processors in the system is strictly bounded), and the waste of resources involved: most of the time, the outcomes of all computations are already known, and the only issue of interest is when these outcomes are produced. We will describe a way to simulate the execution of parallel branch and bound algorithms on arbitrary parallel systems in such a way that the memory and CPU requirements are very reasonable. The use of simulation has only minor consequences for the formulation of the algorithm.
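The simulation approach can be made concrete with a small sketch. The following toy is ours, not the paper's simulator: it runs a synchronous, round-based simulation of best-first branch and bound on a four-item 0/1 knapsack. Varying the number of virtual processors changes only the simulated round count, never the optimum, which is exactly the setting in which simulation is cheaper than real parallel runs. All names and the instance are illustrative.

```python
# Illustrative, synchronous round-based simulation of parallel best-first
# branch and bound on a toy 0/1 knapsack. Names and instance are ours;
# the paper's simulator is event-driven and far more general.
import heapq

values, weights, capacity = [9, 7, 6, 4], [5, 4, 4, 3], 10

def bound(level, value):
    # Optimistic bound: add every remaining value, ignoring capacity.
    return value + sum(values[level:])

def simulate(num_procs):
    best, rounds = 0, 0
    pool = [(-bound(0, 0), 0, 0, 0)]         # (-bound, level, value, weight)
    while pool:
        rounds += 1                          # one simulated parallel step
        batch = [heapq.heappop(pool)
                 for _ in range(min(num_procs, len(pool)))]
        for neg_b, level, value, weight in batch:
            best = max(best, value)          # any partial choice is feasible
            if -neg_b <= best or level == len(values):
                continue                     # pruned, or nothing left to branch on
            for take in (1, 0):              # branch on item `level`
                w = weight + take * weights[level]
                v = value + take * values[level]
                if w <= capacity and bound(level + 1, v) > best:
                    heapq.heappush(pool, (-bound(level + 1, v), level + 1, v, w))
    return best, rounds

for p in (1, 2, 4, 8):
    print(p, simulate(p))                    # same optimum, fewer rounds
```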

2.
A new graph triconnectivity algorithm and its parallelization
We present a new algorithm for finding the triconnected components of an undirected graph. The algorithm is based on a method of searching graphs called open ear decomposition. A parallel implementation of the algorithm on a CRCW PRAM runs in O(log^2 n) parallel time using O(n+m) processors, where n is the number of vertices and m is the number of edges in the graph. A preliminary version of this paper was presented at the 19th Annual ACM Symposium on Theory of Computing, New York, NY, May 1987. Supported by NSF Grant DCR 8514961. Supported by NSF Grant ECS 8404866 and the Semiconductor Research Corporation Grant 86-12-109.

3.
In this paper we present an O(log^5 n) time parallel algorithm for constructing a maximal path in an undirected graph. We also give an O(n^(1/2+ε)) time parallel algorithm for constructing a depth-first search tree in an undirected graph. This work was supported in part by an IBM Faculty Development Award, an NSF Graduate Fellowship, and NSF grant DCR-8351757.
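For contrast with the parallel bounds above: a maximal path is one that cannot be extended at either endpoint, as opposed to a longest path, and building one sequentially is a trivial linear-time greedy. The paper's contribution is doing this in polylogarithmic parallel time. A minimal sequential sketch, with illustrative names:

```python
# A *maximal* path cannot be extended at either endpoint; it need not be long.
# Sequentially this is a linear-time greedy; the paper's point is that the
# same task is hard to do in polylogarithmic parallel time.
from collections import defaultdict

def maximal_path(edges, start):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    path, on_path = [start], {start}
    for at_tail in (True, False):            # extend the tail, then the head
        while True:
            end = path[-1] if at_tail else path[0]
            free = adj[end] - on_path        # unused neighbours of the endpoint
            if not free:
                break                        # endpoint can no longer be extended
            nxt = free.pop()
            on_path.add(nxt)
            if at_tail:
                path.append(nxt)
            else:
                path.insert(0, nxt)
    return path

print(maximal_path([(0, 1), (1, 2), (2, 3), (1, 4)], start=2))
```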

4.
In this paper we consider the worst case ratio between the capacity of min-cuts and the value of max-flow for multicommodity flow problems. We improve the best known bounds for the min-cut max-flow ratio for multicommodity flows in undirected graphs, by replacing the O(log D) in the bound by O(log k), where D denotes the sum of all demands and k denotes the number of commodities. In essence we prove that up to constant factors the worst min-cut max-flow ratios appear in problems where demands are integral and polynomial in the number of commodities. Klein, Rao, Agrawal, and Ravi have previously proved that if the demands and the capacities are integral, then the min-cut max-flow ratio in general undirected graphs is bounded by O(log C log D), where C denotes the sum of all the capacities. Tragoudas has improved this bound to O(log n log D), where n is the number of nodes in the network. Garg, Vazirani and Yannakakis further improved this to O(log k log D). Klein, Plotkin and Rao have proved that for planar networks, the ratio is O(log D). Our result improves the bound for general networks to O(log^2 k) and the bound for planar networks to O(log k). In both cases our result implies the first non-trivial bound that is independent of the magnitude of the numbers involved. The method presented in this paper can be used to give polynomial time approximation algorithms to the minimum cuts in the network up to the above factors. A preliminary version appeared in Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, 1993, 691-697. Research supported by U.S. Army Research Office Grant DAAL-03-91-G-0102, and by a grant from Mitsubishi Electric Laboratories. Research supported in part by a Packard Fellowship, an NSF PYI award, a Sloan Fellowship, and by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS-8920550.

5.
Pseudorandom generators for space-bounded computation
Noam Nisan, Combinatorica 12(4):449-461, 1992.
Pseudorandom generators are constructed which convert O(S log R) truly random bits to R bits that appear random to any algorithm that runs in SPACE(S). In particular, any randomized polynomial time algorithm that runs in space S can be simulated using only O(S log n) random bits. An application of these generators is an explicit construction of universal traversal sequences (for arbitrary graphs) of length n^O(log n). The generators constructed are technically stronger than just appearing random to space-bounded machines, and have several other applications. In particular, applications are given for deterministic amplification (i.e. reducing the probability of error of randomized algorithms), as well as generalizations of it. This work was done in the Laboratory for Computer Science, MIT, supported by NSF 865727-CCR and ARO DALL03-86-K-017.
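The recursive structure of such a generator can be shown in a few lines. The sketch below is a toy rendering of the construction's shape only: the seed is one block plus k pairwise-independent hash functions, and level i outputs G_{i-1}(x) followed by G_{i-1}(h_i(x)), so 2^k output blocks come from a seed of O(S log R) bits. Realizing the hash family as affine maps modulo a prime, and all parameters, are our simplifications.

```python
# Toy rendering of the recursive construction: 2**k output blocks from a seed
# consisting of one block x and k pairwise-independent hash functions.
# The affine-map hash family and all parameters are our simplifications.
import random

BLOCK = 65521                                # prime modulus, ~16-bit blocks

def make_hash(a, b):
    return lambda x: (a * x + b) % BLOCK     # pairwise-independent family

def generate(x, hashes):
    if not hashes:
        return [x]                           # G_0(x) = x, a single block
    *rest, h = hashes                        # h_i is applied at the top level
    return generate(x, rest) + generate(h(x), rest)

rng = random.Random(0)
hs = [make_hash(rng.randrange(1, BLOCK), rng.randrange(BLOCK))
      for _ in range(10)]
out = generate(rng.randrange(BLOCK), hs)     # seed: 21 numbers -> 1024 blocks
print(len(out), out[:4])
```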

6.
We consider zero-knowledge interactive proofs in a richer, more realistic communication environment. In this setting, one may simultaneously engage in many interactive proofs, and these proofs may take place in an asynchronous fashion. It is known that zero-knowledge is not necessarily preserved in such an environment; we show that for a large class of protocols, it cannot be preserved. Any 4-round (computational) zero-knowledge interactive proof (or argument) for a non-trivial language L is not black-box simulatable in the asynchronous setting. An abridged version of this work has appeared in [24].

7.
Sabadini, Walters and others have developed a categorical, machine-based theory of concurrency in which there are four essential aspects: a distributive category of data types; a bicategory Mach whose objects are data types, and whose arrows are input-output machines built from data types; a semantic category (or categories) Sem, suitable to contain the behaviors of machines; and a functor behavior: Mach → Sem. Suitable operations on machines and semantics are found so that the behavior functor preserves these operations. Then, if each machine is decomposable into primitive machines using these operations, the behavior of a general machine is deducible from the behavior of its parts. The theory of non-deterministic finite state automata provides an example of the paradigm and also throws some light on the classical theory of finite state automata. We describe a bicategory whose objects are natural numbers, in which an arrow M: n → p is a finite state automaton with n input states, p output states, and some additional internal states; we require that no transitions begin at output states or end at input states. A machine is represented by a (q+n) × (q+p) matrix. The bicategory supports additional operations: non-deterministic choice, parallel interleaving, and feedback. Enough operations are imposed on machines to show that each machine may be obtained from some atomic ones by means of the operations. The semantic category is the (Bloom-Ésik) iteration theory Mat(X*), whose objects are natural numbers and whose arrows from n to p are n×p matrices with entries in the semiring of languages. The behavior functor associates to a machine M: n → p a matrix |M| of languages, one language to each pair of input and output states. Behavior preserves composition and feedback, takes non-deterministic choice to union, and parallel interleaving to shuffle. Thus, behavior gives a compositional semantics to a primitive notion of concurrent processes. This work has been supported by the Australian Research Council, by CEC under grant number 6317, ASMICS II, by Italian MURST, and by the Italian CNR. Visit to Sydney supported by a grant from the Australian Research Council. Presented at the European Colloquium of Category Theory, Tours, France, 25-31 July 1994.
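The matrix-of-languages semantics can be made executable by truncating languages to a finite length so the semiring operations stay computable. The sketch below is our finite approximation, not the paper's construction: union models non-deterministic choice, and matrix multiplication over (union, concatenation) models sequential composition of behaviors.

```python
# Finite approximation of the matrix-of-languages semantics: entries are sets
# of strings truncated to length <= L, so (union, concatenation) is a
# computable semiring. Encoding is ours, not the paper's.
L = 4

def cat(A, B):                               # concatenation, truncated
    return {a + b for a in A for b in B if len(a + b) <= L}

def mat_mul(X, Y):                           # sequential composition
    return [[set().union(*(cat(X[i][k], Y[k][j]) for k in range(len(Y))))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):                           # non-deterministic choice
    return [[X[i][j] | Y[i][j] for j in range(len(X[0]))]
            for i in range(len(X))]

# Two 1-input/1-output behaviours: M1 emits 'a' or 'ab'; M2 emits 'c'.
M1 = [[{'a', 'ab'}]]
M2 = [[{'c'}]]
print(mat_mul(M1, M2))                       # [[{'ac', 'abc'}]]
print(mat_add(M1, M2))                       # [[{'a', 'ab', 'c'}]]
```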

8.
The use of multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task, by treating these two objectives explicitly and separately. A detailed discussion of DIVACE together with further experimental studies forms the essence of this paper. A new diversity measure, which we call Pairwise Failure Crediting (PFC), is proposed. This measure forms one of the two evolutionary pressures exerted explicitly in DIVACE. Experiments with this diversity measure, as well as comparisons with previously studied approaches, are hence considered. Detailed analysis of the results shows that DIVACE, as a concept, has promise.

9.
We show that Closest Substring, one of the most important problems in the field of consensus string analysis, is W[1]-hard when parameterized by the number k of input strings (and remains so even over a binary alphabet). This is done by giving a "strongly structure-preserving" reduction from the graph problem Clique to Closest Substring. This problem is therefore unlikely to be solvable in time O(f(k)·n^c) for any function f of k and constant c independent of k, i.e., the combinatorial explosion seemingly inherent to this NP-hard problem cannot be restricted to the parameter k. The problem can therefore be expected to be intractable, in any practical sense, for k ≥ 3. Our result supports the intuition that Closest Substring is computationally much harder than the special case of Closest String, although both problems are NP-complete. We also prove W[1]-hardness for other parameterizations in the case of unbounded alphabet size. Our W[1]-hardness result for Closest Substring generalizes to Consensus Patterns, a problem arising in computational biology. * An extended abstract of this paper was presented at the 19th International Symposium on Theoretical Aspects of Computer Science (STACS 2002), Springer-Verlag, LNCS 2285, pages 262-273, held in Juan-Les-Pins, France, March 14-16, 2002. † Work was supported by the Deutsche Forschungsgemeinschaft (DFG), research project "OPAL" (optimal solutions for hard problems in computational biology), NI 369/2. ‡ Work was done while the author was with the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen. Work was partially supported by the Deutsche Forschungsgemeinschaft (DFG), Emmy Noether research group "PIAF" (fixed-parameter algorithms), NI 369/4.
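To make the problem statement concrete, here is a straightforward exact solver, ours and purely illustrative: any feasible length-L center lies within Hamming distance d of some length-L substring of the first input string, so enumerating that Hamming ball and checking the remaining strings is complete. The hardness result above says that combinatorial explosions of this kind cannot be confined to the parameter k alone.

```python
# Straightforward exact solver, for illustration only: a feasible center c of
# length L lies within distance d of some length-L substring of strings[0],
# so enumerate that Hamming ball and verify the remaining strings.
from itertools import combinations, product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def ball(word, d, alphabet):
    # All words at Hamming distance <= d from `word` (duplicates possible).
    for positions in combinations(range(len(word)), d):
        for letters in product(alphabet, repeat=d):
            w = list(word)
            for i, a in zip(positions, letters):
                w[i] = a
            yield ''.join(w)

def closest_substring(strings, L, d, alphabet='ab'):
    subs = lambda s: {s[i:i + L] for i in range(len(s) - L + 1)}
    for t in subs(strings[0]):
        for c in ball(t, d, alphabet):
            if all(any(hamming(c, u) <= d for u in subs(s)) for s in strings):
                return c
    return None

print(closest_substring(['abab', 'bbba', 'aabb'], L=2, d=1))
```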

10.
In this paper we present a fast parallel algorithm for constructing a depth-first search tree for an undirected graph. The algorithm is an RNC algorithm, meaning that it is a probabilistic algorithm that runs in polylog time using a polynomial number of processors on a PRAM. The run time of the algorithm is O(T_MM(n) log^3 n), and the number of processors used is P_MM(n), where T_MM(n) and P_MM(n) are the time and number of processors needed to find a minimum-weight perfect matching on an n-vertex graph with maximum edge weight n. This research was done while the first author was visiting the Mathematical Research Institute in Berkeley. Research supported in part by NSF grant 8120790. Supported by Air Force Grant AFOSR-85-0203A.

11.
We consider the cost of estimating an error bound for the computed solution of a system of linear equations, i.e., estimating the norm of a matrix inverse. Under some technical assumptions we show that computing even a coarse error bound for the solution of a triangular system of equations costs at least as much as testing whether the product of two matrices is zero. The complexity of the latter problem is in turn conjectured to be the same as matrix multiplication, matrix inversion, etc. Since most error bounds in practical use have much lower complexity, this means they should sometimes exhibit large errors. In particular, it is shown that condition estimators that (1) perform at least one operation on each matrix entry, and (2) are asymptotically faster than any zero tester, must sometimes over- or underestimate the inverse norm by a factor that grows faster than any polynomial in n, where n is the dimension of the input matrix and k is the bitsize. Our results hold for the RAM model with bit complexity, as well as computations over rational and algebraic numbers, but not real or complex numbers. Our results also extend to estimating error bounds or condition numbers for other linear algebra problems such as computing eigenvectors. September 10, 1999. Final version received: August 23, 2000.
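An example of the kind of cheap estimator this lower bound concerns is Hager's 1-norm estimator (refined by Higham, and the basis of LAPACK-style condition estimation): it approximates the 1-norm of A^{-1} from a handful of solves after one factorization, far below the cost of any zero tester, so by the result above it must sometimes be badly wrong. A sketch, with illustrative names:

```python
# Sketch of Hager's 1-norm estimator (Higham's refinement is similar): it
# estimates ||A^{-1}||_1 from a few solves with A and A^T, i.e. O(n^2) work
# once A is factored. Function names here are ours.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inv_norm1_estimate(solve, solve_t, n, max_iter=5):
    x = np.full(n, 1.0 / n)                  # start at the centroid of the simplex
    est = 0.0
    for _ in range(max_iter):
        y = solve(x)                         # y = A^{-1} x
        if np.abs(y).sum() <= est:
            break                            # no further progress
        est = np.abs(y).sum()
        z = solve_t(np.sign(y))              # z = A^{-T} sign(y), a subgradient
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:
            break                            # local maximum certified
        x = np.zeros(n)
        x[j] = 1.0                           # move to the best vertex e_j
    return est

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
lu = lu_factor(A)
est = inv_norm1_estimate(lambda v: lu_solve(lu, v),
                         lambda v: lu_solve(lu, v, trans=1), n=8)
true = np.abs(np.linalg.inv(A)).sum(axis=0).max()
print(est, true)                             # estimate vs exact 1-norm
```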

12.
Improved Low-Degree Testing and its Applications
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low-degree test and the distance of this function to the nearest degree-d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [30]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for arbitrarily small δ, provided the field size is polynomially larger than d/δ. The analysis uses a version of Hilbert irreducibility, a tool of algebraic geometry.

As a consequence we obtain an alternate construction for the following proof system: a constant-prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^(-log^(1-ε) n). Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [29]. Raz and Safra obtain their result by giving a strong analysis, in the sense described above, of a new low-degree test that they present.

A second consequence of our analysis is a self-tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct only on a small fraction δ of inputs, then the tester/corrector determines δ and generates values for every input, such that one of them is the correct output. In fact, our results yield something stronger: given the buggy program, we can construct randomized programs such that one of them is correct on every input, with high probability. Such a strong self-corrector is a useful tool in complexity theory, with some applications known. * Supported by an NSF CAREER award, an Alfred P. Sloan Fellowship, and a Packard Fellowship. Part of this work was done when this author was at the IBM Thomas J. Watson Research Center.
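The test itself is easy to state in code. The sketch below (our parameters and names) restricts f to a random line x + t·h, interpolates a degree-d polynomial through d+1 sampled points, and checks that it predicts one more sample; genuine low-degree polynomials always pass, and the theorem quoted above is about what a high passing probability implies for arbitrary f.

```python
# Runnable version of the line test: restrict f to a random line x + t*h,
# fit a degree-d polynomial through d+1 samples, check it predicts one more.
# Field size, dimension and degree below are our toy choices.
import random

P, M, D = 101, 2, 3                          # field, #variables, degree bound

def interp_eval(ts, vs, t):
    # Lagrange interpolation over GF(P), evaluated at t.
    total = 0
    for i, (ti, vi) in enumerate(zip(ts, vs)):
        num = den = 1
        for j, tj in enumerate(ts):
            if j != i:
                num = num * (t - tj) % P
                den = den * (ti - tj) % P
        total = (total + vi * num * pow(den, -1, P)) % P
    return total

def line_test(f, trials=300):
    passed = 0
    for _ in range(trials):
        x = [random.randrange(P) for _ in range(M)]
        h = [random.randrange(P) for _ in range(M)]
        ts = random.sample(range(P), D + 2)
        pt = lambda t: [(xi + t * hi) % P for xi, hi in zip(x, h)]
        vs = [f(pt(t)) for t in ts[:D + 1]]
        passed += interp_eval(ts[:D + 1], vs, ts[-1]) == f(pt(ts[-1]))
    return passed / trials

good = lambda v: (v[0] ** 2 * v[1] + 3 * v[0]) % P   # total degree 3
bad = lambda v: (v[0] ** 4 + v[1] ** 4) % P          # degree 4
print(line_test(good), line_test(bad))               # 1.0 vs well below 1
```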

13.
The paper builds on both a simply typed term system and a computation model on Scott domains via so-called parallel typed while programs (PTWP). The former provides a notion of partial primitive recursive functional on Scott domains supporting a suitable concept of parallelism. Computability on Scott domains seems to entail that Kleene's schema of higher-type simultaneous course-of-values recursion (scvr) is not reducible to partial primitive recursion, so extensions of the term system and of PTWP are studied that are closed under scvr. The twist is certain type-1 Gödel recursors for simultaneous partial primitive recursion; formally such a recursor denotes a function whose values are modelled as finite, in other words partial, sequences. As for PTWP, the concept of type-writable variables is introduced, providing the possibility of creating and manipulating partial sequences. It is shown that the PTWP-computable functionals coincide with those definable in the extended term system plus a constant for sequential minimisation; in particular, the functionals definable in the extended term system can be characterised by a subclass of the PTWP-computable functionals. Moreover, hierarchies of strictly increasing classes in the style of Heinermann, together with matching complexity classes, are introduced. These results extend those for the term system and PTWP [Nig94]. Finally, scvr is employed to define, for each type, the enumeration functional of all finite elements of that type. Received January 30, 1996

14.
Z. Galil, V. Pan, Combinatorica 8(2):189-200, 1988.
Our main result improves the known processor bound by a factor of n^4 (maintaining the expected parallel running time, O(log^3 n)) for the following important problem: find a perfect matching in a general or in a bipartite graph with n vertices. A solution to that problem is used in parallel algorithms for several combinatorial problems, in particular for the problems of finding (i) a (perfect) matching of maximum weight, (ii) a maximum cardinality matching, (iii) a matching of maximum vertex weight, and (iv) a maximum s-t flow in a digraph with unit edge capacities. Consequently the known algorithms for those problems are substantially improved. The results of this paper were presented at the 26th Annual IEEE Symp. FOCS, Portland, Oregon (October 1985). Partially supported by NSF Grants MCS 8303139 and DCR 8511713. Supported by NSF Grants MCS 8203232 and DCR 8507573.

15.
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bits that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced bipartite graph with strong expansion properties. * Much of this work was done while the author was in the Computer Science Division, University of California, Berkeley, and supported in part by a David and Lucile Packard Fellowship for Science and Engineering and NSF NYI Grant No. CCR-9457799. The work was also supported in part by an Alon fellowship and by the Israel Science Foundation. † Much of this work was done while the author was a graduate student in the Computer Science Division, University of California, Berkeley. Supported in part by NSF Grants CCR-9820897, CCF-0346991, and an Alfred P. Sloan Research Fellowship. ‡ Much of this work was done while the author was on leave at the Computer Science Division, University of California, Berkeley. Supported in part by a David and Lucile Packard Fellowship for Science and Engineering, NSF Grants CCR-9912428 and CCR-0310960, NSF NYI Grant CCR-9457799, and an Alfred P. Sloan Research Fellowship.

16.
The asymptotic behaviour of a family of gradient algorithms (including the methods of steepest descent and minimum residues) for the optimisation of bounded quadratic operators in ℝ^d and Hilbert spaces is analyzed. The results obtained generalize those of Akaike (1959) in several directions. First, all algorithms in the family are shown to have the same asymptotic behaviour (convergence to a two-point attractor), which implies in particular that they have similar asymptotic convergence rates. Second, the analysis also covers the Hilbert space case. A detailed analysis of the stability property of the attractor is provided.
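The two-point attractor is easy to observe numerically. In the sketch below (our toy instance), steepest descent with exact line search is run on a quadratic; the normalized gradients of late iterations alternate between two fixed directions, the phenomenon Akaike identified and this paper generalizes.

```python
# Steepest descent with exact line search on a quadratic f(x) = x^T A x / 2:
# the *normalized* gradients settle into an alternation between two fixed
# directions (Akaike's two-point attractor). Instance is our toy choice.
import numpy as np

A = np.diag([1.0, 3.0, 10.0])
x = np.array([1.0, 1.0, 1.0])
for k in range(60):
    g = A @ x                                # gradient of f at x
    x = x - ((g @ g) / (g @ A @ g)) * g      # exact line search step
    if k >= 56:
        print(np.round(g / np.linalg.norm(g), 4))   # period-2 pattern
```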

17.
We describe an algorithm for selecting the αn-th largest element (where 0 < α < 1) from a totally ordered set of n elements, using at most (1 + (1+o(1))H(α))·n comparisons, where H(α) is the binary entropy function and the o(1) stands for a function that tends to 0 as α tends to 0. For small values of α this is almost the best possible, as there is a lower bound of about (1 + H(α))·n comparisons. The algorithm obtained beats the global 3n upper bound of Schönhage, Paterson and Pippenger for α < 1/3.
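Plugging numbers into the bound makes the comparison with the classical 3n algorithm concrete (we drop the o(1) term, so the figures below are only indicative):

```python
# Numbers for the bound: (1 + H(alpha)) * n comparisons (o(1) term dropped)
# versus the classical 3n bound; the crossover claimed above is alpha = 1/3.
from math import log2

def H(a):                                    # binary entropy function
    return -a * log2(a) - (1 - a) * log2(1 - a)

for a in (0.01, 0.1, 0.25, 1 / 3):
    print(f"alpha = {a:.3f}:  (1 + H(alpha)) n = {1 + H(a):.3f} n  vs  3 n")
```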

18.
L. Rónyai, Combinatorica 9(2):199-206, 1989.
We consider the problem of factoring polynomials over GF(p) for those prime numbers p for which all prime factors of p-1 are small. We show that if we have a primitive t-th root of unity for every prime t dividing p-1, then factoring polynomials over GF(p) can be done in deterministic polynomial time. Research partially supported by Hungarian National Foundation for Scientific Research, Grant 1812.
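The flavor of such deterministic splitting can be shown for the prime t = 2, where the needed t-th roots of unity are just +1 and -1: the roots r of a squarefree polynomial are separated according to the value of r^((p-1)/2), so gcd(f, x^((p-1)/2) - 1) splits f without any random choices. The sketch below is ours, not the paper's full algorithm, and works through a small example over GF(97):

```python
# Splitting via characters for t = 2 (roots of unity are +1/-1): the roots r
# of a squarefree f are separated by the value r^((p-1)/2), so a gcd splits f
# deterministically. This shows the flavor only, not the paper's algorithm.
p = 97                                       # p - 1 = 2^5 * 3 is smooth

def pmod(a, f):                              # a mod f, coefficients low -> high
    a = a[:]
    while len(a) >= len(f):
        c = a[-1] * pow(f[-1], -1, p) % p
        for i in range(len(f)):
            a[len(a) - len(f) + i] = (a[len(a) - len(f) + i] - c * f[i]) % p
        while a and a[-1] == 0:
            a.pop()
    return a

def pmul(a, b, f):                           # a * b mod f
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return pmod(r, f)

def xpow(e, f):                              # x^e mod f, square-and-multiply
    r, sq = [1], pmod([0, 1], f)
    while e:
        if e & 1:
            r = pmul(r, sq, f)
        sq = pmul(sq, sq, f)
        e >>= 1
    return r

def psub(a, b):
    r = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
         for i in range(max(len(a), len(b)))]
    while r and r[-1] == 0:
        r.pop()
    return r

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    inv = pow(a[-1], -1, p)                  # normalize to a monic gcd
    return [c * inv % p for c in a]

f = [89, 71, 82, 1]                          # (x-3)(x-5)(x-7) over GF(97)
print(pgcd(f, psub(xpow((p - 1) // 2, f), [1])))   # [94, 1], i.e. x - 3
```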

19.
Xiaotie Deng, Combinatorica 16(4):453-464, 1996.
In this paper, we consider the following distributed bipartite matching problem: Let G=(L,R;E) be a bipartite graph in which the boys form part L (left nodes) and the girls form part R (right nodes). There is an edge (l_i, r_j) ∈ E iff boy l_i is interested in girl r_j. Every boy l_i will propose to a girl r_j among all those he is interested in, i.e., his adjacent right nodes in the bipartite graph G. If several boys propose to the same girl, only one of them will be accepted by the girl. We assume that none of the boys communicate with each other. This matching problem is typical of distributed computing under incomplete information: each boy only knows his own preference and none of the others. We first study the one-round matching problem: each boy proposes to at most one girl, so that the total number of girls receiving a proposal is maximized. If the maximum matching is M, no deterministic algorithm can guarantee a matching of more than constant size, but a randomized algorithm achieves a constant ratio, and this is shown optimal by an adversary argument. If we allow many rounds in matching, with the senders learning from their failures, then, for deterministic algorithms, the ratio (of the optimal solution to the solution of the algorithm) is unbounded when the number of rounds is smaller than Δ(G), and becomes bounded (two) at the Δ(G)-th round. In contrast, an extension of the one-round randomized algorithm establishes that there is no such discontinuity in the randomized case. This randomized result is also matched by an upper bound of asymptotically the same order.
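The one-round randomized strategy is simple to simulate. In the sketch below (ours), every boy proposes to a uniformly random girl among those he is interested in, and we estimate the expected number of distinct girls receiving proposals on the hard instance where all boys share the same interests:

```python
# One-round randomized proposals under incomplete information: every boy
# picks a uniformly random girl he is interested in. We estimate the expected
# number of distinct girls proposed to; instance and names are ours.
import random

def one_round(adj, trials=10000):
    total = 0
    for _ in range(trials):
        total += len({random.choice(girls) for girls in adj})
    return total / trials

# Hard instance: 5 boys, all interested in the same 5 girls (max matching 5).
adj = [list(range(5)) for _ in range(5)]
print(one_round(adj))        # about 5 * (1 - (1 - 1/5)**5) ~= 3.36, not 5
```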

20.
Tze-Heng Ma, Jeremy Spinrad, Order 8(2):175-183, 1991.
Most papers dealing with partial orders assume that the input is given either in transitively closed or transitively reduced form. In this paper, we show that it is possible to solve some problems on partial orders in less time than it takes to perform transitive closure or reduction for general graphs. In particular, we present efficient algorithms for recognizing two-dimensional partial orders and N-free partial orders when no assumptions are made about the form of the input. This work was supported by National Science Foundation Grant DCR-8604577 and the Vanderbilt University Research Council.
