Similar Articles
1.
Proofs of classical Chernoff-Hoeffding bounds have been used to obtain polynomial-time implementations of Spencer's derandomization method of conditional probabilities on standard finite machine models: given m events whose complements are large deviations corresponding to weighted sums of n mutually independent Bernoulli trials, Raghavan's lattice approximation algorithm constructs, for 0-1 weights and integer deviation terms, in O(mn) time a point for which all events hold. For rational weighted sums of Bernoulli trials the lattice approximation algorithm and Spencer's hyperbolic cosine algorithm are deterministic procedures, but a polynomial-time implementation was not known. We resolve this problem with an $O(mn^2\log\frac{mn}{\epsilon})$-time algorithm, whenever the probability that all events hold is at least ϵ > 0. Since such algorithms simulate the proof of the underlying large deviation inequality in a constructive way, we call them algorithmic versions of the inequality. Applications to general packing integer programs and resource-constrained scheduling result in tight polynomial-time approximation algorithms. © 1996 John Wiley & Sons, Inc.
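The method of conditional probabilities can be illustrated on a much simpler problem than the lattice approximation treated in the abstract. The sketch below (an illustration, not the paper's algorithm; all names are invented for the example) derandomizes the random-assignment bound for MAX-CUT: the conditional expectation of the cut size is exactly computable, and fixing each vertex to the better side never lets it decrease, so the final cut has at least |E|/2 edges.

```python
def conditional_expected_cut(edges, assignment):
    """Expected size of the cut when every still-unassigned vertex is
    placed on a uniformly random side; assignment maps fixed vertices
    to 0/1."""
    exp = 0.0
    for u, v in edges:
        if u in assignment and v in assignment:
            exp += 1.0 if assignment[u] != assignment[v] else 0.0
        else:
            exp += 0.5  # an edge with a random endpoint is cut w.p. 1/2
    return exp

def derandomized_max_cut(edges, n):
    """Method of conditional probabilities: fix vertices one by one so
    the conditional expectation never decreases below its initial value
    E[cut] = |E|/2."""
    assignment = {}
    for v in range(n):
        assignment[v] = 0
        e0 = conditional_expected_cut(edges, assignment)
        assignment[v] = 1
        e1 = conditional_expected_cut(edges, assignment)
        assignment[v] = 0 if e0 >= e1 else 1
    return assignment

edges = [(0, 1), (1, 2), (2, 0), (0, 3)]
cut = derandomized_max_cut(edges, 4)
size = sum(1 for u, v in edges if cut[u] != cut[v])
assert 2 * size >= len(edges)  # guaranteed by the method
```

The same pattern underlies the paper's algorithm: there the conditional expectation is replaced by an efficiently computable pessimistic estimator derived from the proof of the large deviation inequality.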

2.
3.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies and also those algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
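The simplest member of the algorithm class the abstract describes is pure random search, which is probabilistically dense in the sense above: every open subset of the domain receives positive sampling probability, so the best value found converges to the global minimum almost surely. A minimal sketch (names and the test function are illustrative):

```python
import random

def pure_random_search(f, bounds, iterations, seed=0):
    """Sample the domain uniformly and keep the best point seen.
    The uniform distribution puts positive mass on every open subset,
    which is the density property the convergence theory requires."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(iterations):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# global minimum 0 is attained at (1, -2)
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2
x, val = pure_random_search(f, [(-5.0, 5.0), (-5.0, 5.0)], 20000)
assert val < 0.05
```

Practical algorithms in the class bias the sampling distribution toward promising regions; the paper's conditions spell out how much bias is allowed while preserving almost sure convergence.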

4.
It is proved that computing the subordinate matrix norm ∥A∥∞,1 is NP-hard. Even more, the existence of a polynomial-time algorithm for computing this norm with relative accuracy less than 1/(4n^2), where n is the matrix size, would imply P = NP.
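The hardness is consistent with the fact that the only obvious exact method enumerates exponentially many sign vectors: since ∥Ax∥₁ is convex in x, its maximum over the unit ∞-ball is attained at a vertex of the cube. A brute-force sketch (illustrative, not from the paper):

```python
from itertools import product

def norm_inf_to_1(A):
    """Exact ||A||_{infinity,1} = max over x in {-1,+1}^n of ||Ax||_1.
    Convexity puts the maximizer at a cube vertex, so enumerating all
    2^n sign vectors is exact -- but exponential in n."""
    n = len(A[0])
    best = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        best = max(best, sum(abs(sum(a * s for a, s in zip(row, signs)))
                             for row in A))
    return best

value = norm_inf_to_1([[2.0, 1.0], [1.0, 2.0]])
assert value == 6.0  # attained at x = (1, 1)
```

The NP-hardness result says that no essentially better exact (or even moderately accurate) method exists unless P = NP.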

5.
Stuart  A. M. 《Numerical Algorithms》1997,14(1-3):227-260
The numerical solution of initial value problems for ordinary differential equations is frequently performed by means of adaptive algorithms with user-input tolerance τ. The time-step is then chosen according to an estimate, based on small time-step heuristics, designed to try to ensure that an approximation to the local error committed is bounded by τ. A question of natural interest is to determine how the global error behaves with respect to the tolerance τ. This has obvious practical interest and also leads to an interesting problem in mathematical analysis. The primary difficulties arising in the analysis are that: (i) the time-step selection mechanisms used in practice are discontinuous as functions of the specified data; (ii) the small time-step heuristics underlying the control of the local error can break down in some cases. In this paper an analysis is presented which incorporates these two difficulties. For a mathematical model of an error per unit step or error per step adaptive Runge–Kutta algorithm, it may be shown that in a certain probabilistic sense, with respect to a measure on the space of initial data, the small time-step heuristics are valid with probability one, leading to a probabilistic convergence result for the global error as τ→0. The probabilistic approach is only valid in dimension m>1; this observation is consistent with recent analysis concerning the existence of spurious steady solutions of software codes which highlights the difference between the cases m=1 and m>1. The breakdown of the small time-step heuristics can be circumvented by making minor modifications to the algorithm, leading to a deterministic convergence proof for the global error of such algorithms as τ→0. An underlying theory is developed and the deterministic and probabilistic convergence results are proved as particular applications of this theory. This revised version was published online in June 2006 with corrections to the Cover Date.
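An error-per-step controller of the kind analysed here can be sketched with the cheapest embedded pair, explicit Euler (order 1) inside Heun's method (order 2): their difference estimates the local error, a step is accepted when the estimate is below the tolerance, and the step size is rescaled by the usual heuristic. This is a minimal illustration under assumed names, not the paper's model algorithm.

```python
import math

def adaptive_euler_heun(f, y0, t0, t1, tol):
    """Error-per-step control for a scalar ODE y' = f(t, y).
    |heun - euler| estimates the local error of the Euler step; the
    accepted value is the higher-order (Heun) one."""
    t, y, h = t0, y0, (t1 - t0) / 100.0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        euler = y + h * k1
        heun = y + 0.5 * h * (k1 + k2)
        err = abs(heun - euler)      # local error estimate
        if err <= tol:
            t, y = t + h, heun       # accept the step
        # standard rescaling with a safety factor, clamped to [0.2, 2]
        h *= 0.9 * min(2.0, max(0.2, math.sqrt(tol / err))) if err > 0 else 2.0
    return y

y = adaptive_euler_heun(lambda t, y: -y, 1.0, 0.0, 1.0, 1e-6)
assert abs(y - math.exp(-1.0)) < 1e-3
```

The discontinuity the paper emphasizes is visible in the accept/reject branch: the numerical trajectory is a discontinuous function of the initial data and the tolerance, which is what makes the convergence analysis delicate.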

6.
For a number of computational search problems, the existence of a polynomial-time algorithm for the problem implies that a polynomial-time algorithm for the problem is constructively known. Some instances of such self-witnessing polynomial-time complexity are presented. Our main result demonstrates this property for the problem of computing the prime factorization of a positive integer, based on a lemma which shows that a certificate for primality or compositeness can be constructed for a positive integer p in deterministic polynomial time given a complete factorization of p - 1.
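The primality certificate alluded to in the lemma is the classical Lucas certificate: a witness g with g^(p-1) ≡ 1 (mod p) and g^((p-1)/q) ≢ 1 (mod p) for every prime q dividing p − 1 proves that the multiplicative group mod p has order p − 1, hence that p is prime. A sketch of checking and finding such a witness (the linear search is for illustration only):

```python
def is_lucas_witness(g, p, prime_factors):
    """Lucas test: g certifies primality of p if g^(p-1) = 1 (mod p)
    and g^((p-1)/q) != 1 (mod p) for every prime q dividing p-1."""
    if pow(g, p - 1, p) != 1:
        return False
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors)

def primality_certificate(p, prime_factors):
    """Search for a witness; if p is prime, any primitive root works,
    so one exists."""
    for g in range(2, p):
        if is_lucas_witness(g, p, prime_factors):
            return g
    return None

# 7919 is prime and 7918 = 2 * 37 * 107
g = primality_certificate(7919, [2, 37, 107])
assert g is not None
```

Certifying the full certificate recursively requires certificates for the prime factors of p − 1 as well (a Pratt certificate), which is where the complete factorization of p − 1 enters.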

7.
We describe a polynomial time algorithm to compute Jacobi symbols of exponentially large integers of special form, including so-called sparse integers which are exponentially large integers with only polynomially many nonzero binary digits. In a number of papers sequences of Jacobi symbols have been proposed as generators of cryptographically secure pseudorandom bits. Our algorithm allows us to use much larger moduli in such constructions. We also use our algorithm to design a probabilistic polynomial time test which decides if a given integer of the aforementioned type is a perfect square (assuming the Extended Riemann Hypothesis). We also obtain analogues of these results for polynomials over finite fields. Moreover, in this case the perfect square testing algorithm is unconditional. These results can be compared with many known NP-hardness results for some natural problems on sparse integers and polynomials.
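For integers given in ordinary binary representation, the Jacobi symbol is computed in polynomial time by the standard quadratic-reciprocity algorithm; the paper's contribution is to handle numbers whose binary representation is itself exponentially long. A sketch of the standard algorithm, which the special-form algorithm builds on:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the supplement for 2
    and quadratic reciprocity; runs in time polynomial in the bit
    lengths of a and n."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):   # (2/n) = -1 iff n = 3 or 5 (mod 8)
                result = -result
        a, n = n, a               # reciprocity: flip sign iff both = 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

assert jacobi(1001, 9907) == -1
```

For a prime modulus the Jacobi symbol coincides with the Legendre symbol, so it equals 1 on every nonzero quadratic residue, which is what makes it usable as a square test.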

8.
《Quaestiones Mathematicae》2013,36(4):307-344
Sattolo's algorithm creates a random cyclic permutation by interchanging pairs of elements in an appropriate manner; the Fisher-Yates algorithm produces random (not necessarily cyclic) permutations in a very similar way. The distributions of the movements of the elements in these two algorithms have already been treated quite extensively in previous work. In this paper, we are interested in the joint distribution of two elements j and k; we are able to compute the bivariate generating functions explicitly, although the result is quite involved. From these, moments and limiting distributions can be deduced. Furthermore, we compute the probability that elements i and j ever change places in both algorithms.
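The two algorithms differ in a single character: Fisher-Yates swaps position i with a uniformly chosen j ≤ i (possibly itself), while Sattolo insists on j < i, which forces the result to be one big cycle. A sketch of both:

```python
import random

def fisher_yates(n, rng):
    """Uniformly random permutation of 0..n-1."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)   # j may equal i: element may stay put
        p[i], p[j] = p[j], p[i]
    return p

def sattolo(n, rng):
    """Uniformly random *cyclic* permutation of 0..n-1: identical code
    except that an element never swaps with itself."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i)       # j < i, which forces a single cycle
        p[i], p[j] = p[j], p[i]
    return p

rng = random.Random(42)
perm = sattolo(6, rng)
# following the permutation from 0 must visit all six elements
seen, x = set(), 0
while x not in seen:
    seen.add(x)
    x = perm[x]
assert len(seen) == 6
```

The "movements" studied in the paper are exactly the swaps these loops perform on individual elements.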

9.
In this paper, we study the computational complexity and approximation complexity of the connected set-cover problem. We derive necessary and sufficient conditions for the connected set-cover problem to have a polynomial-time algorithm. We also present a sufficient condition for the existence of a (1 +? ln ??)-approximation. In addition, one such (1 +? ln ??)-approximation algorithm for this problem is proposed. Furthermore, it is proved that there is no polynomial-time $O(\log^{2-\varepsilon} n)$-approximation for any $\varepsilon > 0$ for the connected set-cover problem on general graphs, unless NP has a quasi-polynomial-time Las Vegas algorithm.
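For comparison, the unconnected problem admits the classical greedy (1 + ln n)-approximation: repeatedly pick the set covering the most still-uncovered elements. The connectivity constraint studied in the paper is what makes matching this ratio hard. A sketch of the plain greedy algorithm (not the paper's connected variant; names are illustrative):

```python
def greedy_set_cover(universe, sets):
    """Classical greedy set cover: each round takes the set covering
    the most uncovered elements; achieves a (1 + ln n) approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe cannot be covered by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
picked = greedy_set_cover({1, 2, 3, 4, 5}, sets)
assert set().union(*(sets[i] for i in picked)) == {1, 2, 3, 4, 5}
```

In the connected version the chosen sets must additionally induce a connected subgraph, so a greedy choice can be blocked, which is the source of the extra difficulty.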

10.
Given n points in 3D, sampled from k original planes (with sampling errors), a new probabilistic method for detecting coplanar subsets of points in O(k^6) steps is introduced. The planes are reconstructed with small probability of error. The algorithm reduces the problem of reconstruction to the problem of clustering in R^3 and thereby produces effective results. The algorithm is significantly faster than other known algorithms in most cases.

11.
This paper provides new results on computing simultaneous sparse approximations of multichannel signals over redundant dictionaries using two greedy algorithms. The first one, p-thresholding, selects the S atoms that have the largest p-correlation while the second one, p-simultaneous matching pursuit (p-SOMP), is a generalisation of an algorithm studied by Tropp in (Signal Process. 86:572–588, 2006). We first provide exact recovery conditions as well as worst case analyses of all algorithms. The results, expressed using the standard cumulative coherence, are very reminiscent of the single channel case and, in particular, impose stringent restrictions on the dictionary. We unlock the situation by performing an average case analysis of both algorithms. First, we set up a general probabilistic signal model in which the coefficients of the atoms are drawn at random from the standard Gaussian distribution. Second, we show that under this model, and with mild conditions on the coherence, the probability that p-thresholding and p-SOMP fail to recover the correct components is overwhelmingly small and gets smaller as the number of channels increases. Furthermore, we analyse the influence of selecting the set of correct atoms at random. We show that, if the dictionary satisfies a uniform uncertainty principle (Candes and Tao, IEEE Trans. Inf. Theory, 52(12):5406–5425, 2006), the probability that simultaneous OMP fails to recover any sufficiently sparse set of atoms gets increasingly smaller as the number of channels increases.

12.
Two continuous formulations of the maximum independent set problem on a graph G=(V,E) are considered. Both cases involve the maximization of an n-variable polynomial over the n-dimensional hypercube, where n is the number of nodes in G. Two (polynomial) objective functions F(x) and H(x) are considered. Given any point x0 in the hypercube, we propose two polynomial-time algorithms based on these formulations for finding maximal independent sets with cardinality greater than or equal to F(x0) and H(x0), respectively. A relation between the two approaches is studied and a more general statement for dominating sets is proved. Results of preliminary computational experiments for some of the DIMACS clique benchmark graphs are presented.

13.
14.
We are given n coins of which k are heavy (defective), while the remaining n − k are light (good). We know both the weight of the good coins and the weight of the defective ones. Therefore, if we weigh a subset Q ⊆ S with a spring scale, then the outcome will tell us exactly the number of defectives contained in Q. The problem, known as the Counterfeit Coins problem, is to identify the set of defective coins by minimizing the number of weighings, also called queries. It is well known that $\Theta(k\log_{k+1}(n/k))$ queries are enough, even for non-adaptive algorithms, in case k ≤ cn for some constant 0 < c < 1. A natural interesting generalization arises when we are required to identify any subset of m ≤ k defectives. We show that while for randomized algorithms $\tilde{\Theta}(m)$ queries are sufficient, the deterministic non-adaptive counterpart still requires $\Theta(k\log_{k+1}(n/k))$ queries, in case k ≤ n/28; therefore, finding any subset of defectives is not easier than finding all of them by a non-adaptive deterministic algorithm. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2012
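The power of the counting query is easy to see adaptively: split the current group, weigh one half to learn how many defectives it contains, and recurse into the nonempty sides. A small simulation sketch of this divide-and-conquer strategy (illustrative only; the paper's results concern non-adaptive query sets):

```python
def count_query(Q, defectives):
    """Spring-scale query: returns how many defectives lie in Q."""
    return len(Q & defectives)

def find_defectives(coins, k, defectives):
    """Adaptive divide-and-conquer: one query per split tells how the
    defectives distribute over the two halves."""
    queries = [0]
    def rec(group, kk):
        if kk == 0:
            return set()
        if len(group) == kk:     # everything left is defective
            return set(group)
        mid = len(group) // 2
        left = group[:mid]
        queries[0] += 1
        kl = count_query(set(left), defectives)
        return rec(left, kl) | rec(group[mid:], kk - kl)
    found = rec(list(coins), k)
    return found, queries[0]

found, q = find_defectives(range(16), 2, {3, 11})
assert found == {3, 11}
```

Non-adaptive algorithms must commit to all query sets in advance, which is exactly the regime where the abstract's lower bound bites.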

15.
We present a deterministic polynomial-time algorithm that computes the mixed discriminant of an n-tuple of positive semidefinite matrices to within an exponential multiplicative factor. To this end we extend the notion of doubly stochastic matrix scaling to a larger class of n-tuples of positive semidefinite matrices, and provide a polynomial-time algorithm for this scaling. As a corollary, we obtain a deterministic polynomial algorithm that computes the mixed volume of n convex bodies in R^n to within an error which depends only on the dimension. This answers a question of Dyer, Gritzmann and Hufnagel. A "side benefit" is a generalization of Rado's theorem on the existence of a linearly independent transversal.
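The single-matrix special case of the scaling used here is Sinkhorn's classical iteration: alternately normalizing the rows and columns of a positive matrix converges to a doubly stochastic scaling D1·A·D2. A sketch of that basic iteration (the paper's extension to tuples of PSD matrices is more involved; names here are illustrative):

```python
def sinkhorn(A, iters=500):
    """Alternate row and column normalization of a positive square
    matrix; by Sinkhorn's theorem this converges to a doubly
    stochastic scaling for matrices with full support."""
    n = len(A)
    B = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                       # row normalization
            s = sum(B[i])
            B[i] = [x / s for x in B[i]]
        for j in range(n):                       # column normalization
            s = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= s
    return B

B = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
assert all(abs(sum(row) - 1.0) < 1e-9 for row in B)
```

The scaled matrix has permanent-like quantities within a controlled factor of those of a doubly stochastic matrix, which is the mechanism behind the exponential-factor approximation.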

16.
We develop a Las Vegas-randomized algorithm which performs interpolation of sparse multivariate polynomials over finite fields. Our algorithm can be viewed as the first successful adaptation of the sparse polynomial interpolation algorithm for the complex field developed by M. Ben-Or and P. Tiwari (1988, in “Proceedings of the 20th ACM Symposium on the Theory of Computing,” pp. 301–309, Assoc. Comput. Mach., New York) to the case of finite fields. It improves upon a previous result by D. Y. Grigoriev, M. Karpinski, and M. F. Singer (1990, SIAM J. Comput. 19, 1059–1063) and is by far the most time efficient algorithm (time and processor efficient parallel algorithm) for the problem when the finite field is large. As applications, we obtain Monte Carlo-randomized parallel algorithms for sparse multivariate polynomial factorization and GCD over finite fields. The efficiency of these algorithms improves upon that of the previously known algorithms for the respective problems.

17.
This paper studies the complexity of the polynomial-time samplable (P-samplable) distributions, which can be approximated within an exponentially small factor by sampling algorithms in time polynomial in the length of their outputs. The paper shows that common assumptions in complexity theory yield the separation of polynomial-time samplable distributions from the polynomial-time computable distributions with respect to polynomial domination, average-polynomial domination, polynomial equivalence, and average-polynomial equivalence.

18.
It is proved that the classical algorithm for constructing Newton-Puiseux expansions for the roots of polynomials using the method of Newton polygons is of polynomial complexity in the notation length of the expansion coefficients. This result is used, in the case of a ground field of characteristic 0, to construct polynomial-time algorithms for factoring polynomials over fields of formal power series, and for fundamental computational problems in the theory of algebraic curves, such as curve normalization. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova Akademii Nauk SSSR, Vol. 176, pp. 127–150, 1989.

19.
B. Burgeth 《PAMM》2002,1(1):466-467
The fast and accurate evaluation of expected values involving the probabilistic β-distributions poses severe problems for standard numerical integration schemes. An efficient algorithm for evaluating such integrals is presented, based on approximation and analytical evaluation rather than numerical integration. Starting from an extension of the 2-parameter family of β-distributions, a criterion is derived to assess the correctness of any integration scheme in the numerically demanding limiting cases of this family.
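A baseline quadrature approach makes the difficulty concrete: a midpoint rule works well for smooth parameter values, but for a < 1 or b < 1 the β-density blows up at the endpoints and such schemes degrade, which is the regime the analytical approach targets. A sketch of the baseline (illustrative names; well-behaved parameters chosen so the rule is accurate):

```python
import math

def beta_expectation(g, a, b, n=20000):
    """Midpoint-rule approximation of E[g(X)] for X ~ Beta(a, b).
    The normalizing constant is 1/B(a, b) = Gamma(a+b)/(Gamma(a)Gamma(b))."""
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = h * (i + 0.5)
        total += g(x) * c * x ** (a - 1) * (1.0 - x) ** (b - 1) * h
    return total

mean = beta_expectation(lambda x: x, 2.0, 3.0)        # exact: a/(a+b) = 0.4
second = beta_expectation(lambda x: x * x, 2.0, 3.0)  # exact: 0.2
assert abs(mean - 0.4) < 1e-6
```

Known closed forms such as E[X] = a/(a+b) provide exactly the kind of correctness check on an integration scheme that the abstract's criterion generalizes.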

20.
For a bounded integer ℓ, we wish to color all edges of a graph G so that any two edges within distance ℓ have different colors. Such a coloring is called a distance-ℓ edge-coloring or an ℓ-edge-coloring of G. The distance-ℓ edge-coloring problem is to compute the minimum number of colors required for a distance-ℓ edge-coloring of a given graph G. A partial k-tree is a graph with tree-width bounded by a fixed constant k. We first present a polynomial-time exact algorithm to solve the problem for partial k-trees, and then give a polynomial-time 2-approximation algorithm for planar graphs.
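A simple greedy heuristic makes the constraint concrete: give each edge the smallest color not used by any edge within distance ℓ, measured here as distance in the line graph (so two edges sharing an endpoint are at distance 1). This sketch is an illustration under that convention, not the paper's exact or 2-approximation algorithm, and gives only an upper bound on the optimum.

```python
from collections import defaultdict

def greedy_distance_edge_coloring(edges, ell):
    """Greedy distance-ell edge coloring: smallest color not used by
    any edge within line-graph distance ell."""
    adj = defaultdict(set)            # vertex -> incident edge indices
    for i, (u, v) in enumerate(edges):
        adj[u].add(i)
        adj[v].add(i)

    def within(i):
        """Edges within distance ell of edge i (BFS over the line graph)."""
        frontier, seen = {i}, {i}
        for _ in range(ell):
            frontier = {k for j in frontier for w in edges[j]
                        for k in adj[w]} - seen
            seen |= frontier
        return seen - {i}

    color = {}
    for i in range(len(edges)):
        used = {color[j] for j in within(i) if j in color}
        c = 0
        while c in used:
            c += 1
        color[i] = c
    return color

# path a-b-c-d: with ell = 2 all three edges pairwise conflict
edges = [("a", "b"), ("b", "c"), ("c", "d")]
col = greedy_distance_edge_coloring(edges, 2)
assert len(set(col.values())) == 3
```

For ℓ = 1 this reduces to greedy ordinary edge coloring; the paper's algorithms achieve the optimum on partial k-trees and a factor 2 on planar graphs.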
