20 matching records found (search time: 31 ms)
1.
Karmarkar, Karp, Lipton, Lovász, and Luby proposed a Monte Carlo algorithm for approximating the permanent of a non-negative n×n matrix, based on an easily computed, unbiased estimator. It is not difficult to construct 0,1-matrices for which the variance of this estimator is so large that an exponential number of trials is necessary to obtain a reliable approximation within a constant factor of the correct value.
Nevertheless, the same authors conjectured that for a random 0,1-matrix the variance of the estimator is typically small.
The conjecture is shown to be true; indeed, for almost every 0,1-matrix A, just O(n ω(n) ε⁻²) trials suffice to obtain a reliable approximation to the permanent of A within a factor 1±ε of the correct value. Here ω(n) is any function tending to infinity as n→∞. This result extends to random 0,1-matrices with density at least n^{−1/2} ω(n).
It is also shown that polynomially many trials suffice to approximate the permanent of any dense 0,1-matrix, i.e., one in which every row and column sum is at least (1/2+α)n for some constant α>0. The degree of the polynomial bounding the number of trials is a function of α, and increases as α→0.
Supported by NSF grant CCR-9225008.
The work described here was partly carried out while the author was visiting Princeton University as a guest of DIMACS (Center for Discrete Mathematics and Computer Science).
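The abstract does not spell out the estimator itself. As an illustration, the sketch below implements the closely related classical Godsil–Gutman estimator (randomly sign each entry and square the determinant), whose mean is exactly the permanent of a 0,1-matrix; the matrix and trial count are arbitrary illustrative choices, and the test averages over all sign patterns so that unbiasedness can be checked exactly.

```python
import itertools
import random
from fractions import Fraction

def permanent(A):
    """Permanent by brute force over permutations (exponential; tiny n only)."""
    n = len(A)
    total = 0
    for sigma in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

def det(A):
    """Determinant by Laplace expansion along the first row (tiny matrices only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        d += (-1) ** j * A[0][j] * det(minor)
    return d

def godsil_gutman_estimate(A, trials, rng):
    """Average of det(B)^2 over random sign matrices B with |B_ij| = A_ij."""
    n = len(A)
    acc = 0
    for _ in range(trials):
        B = [[rng.choice((-1, 1)) * A[i][j] for j in range(n)] for i in range(n)]
        acc += det(B) ** 2
    return acc / trials

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]

# A single Monte Carlo run of the estimator:
est = godsil_gutman_estimate(A, 200, random.Random(1))

# Unbiasedness, checked exactly: enumerate every +/-1 sign pattern for the
# ones of A and average det^2 -- the exact mean equals perm(A).
ones = [(i, j) for i in range(3) for j in range(3) if A[i][j] == 1]
total, count = Fraction(0), 0
for signs in itertools.product((-1, 1), repeat=len(ones)):
    B = [row[:] for row in A]
    for (i, j), s in zip(ones, signs):
        B[i][j] = s
    total += det(B) ** 2
    count += 1
print(permanent(A), total / count)  # -> 3 3
```

The paper's point is precisely about how many `trials` are needed for the sampled average to concentrate around this mean.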
2.
We formulate the notion of a "good approximation" to a probability distribution over a finite abelian group G. The quality of the approximating distribution is characterized by a parameter ε, which bounds the difference between corresponding Fourier coefficients of the two distributions. It is also required that the sample space of the approximating distribution be of size polynomial in log|G| and 1/ε. Such approximations are useful in reducing or eliminating the use of randomness in certain randomized algorithms.
We demonstrate the existence of such good approximations to arbitrary distributions. In the case of n random variables distributed uniformly and independently over the range {0, 1, …, d−1}, we provide an efficient construction of a good approximation. The approximation constructed has the property that any linear combination of the random variables (modulo d) has essentially the same behavior under the approximating distribution as it does under the uniform distribution. Our analysis is based on Weil's character sum estimates. We apply this result to the construction of a non-binary linear code in which the alphabet symbols appear almost uniformly in each non-zero codeword.
Received: September 22, 1990 / Revised: first revision November 11, 1990; last revision November 10, 1997
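For tiny groups, the quality parameter ε of such an approximation can be computed by brute force. The following sketch (assuming the group is Z_d^n, as the "modulo d" setting of the abstract suggests; function names are hypothetical) evaluates the largest absolute nonzero Fourier coefficient of an empirical distribution, i.e., its maximal deviation from uniform in the Fourier metric.

```python
import cmath
import itertools

def fourier_bias(sample, d, n):
    """Max over nonzero characters alpha in Z_d^n of |E_{x in sample} omega^(alpha.x)|.

    This is the epsilon of the abstract: the uniform distribution has all
    nonzero Fourier coefficients equal to 0, so this is the largest deviation
    of a Fourier coefficient of the empirical distribution from uniform."""
    omega = cmath.exp(2j * cmath.pi / d)
    worst = 0.0
    for alpha in itertools.product(range(d), repeat=n):
        if all(a == 0 for a in alpha):
            continue  # the trivial character matches for every distribution
        coeff = sum(omega ** (sum(a * x for a, x in zip(alpha, xs)) % d)
                    for xs in sample) / len(sample)
        worst = max(worst, abs(coeff))
    return worst

# The full group Z_3^2 as the sample space: a perfect (0-biased) approximation.
full = list(itertools.product(range(3), repeat=2))
print(round(fourier_bias(full, 3, 2), 12))  # -> 0.0

# A skewed sample space has some character with a large coefficient.
skewed = [(0, 0), (0, 1), (0, 2), (0, 0)]
print(fourier_bias(skewed, 3, 2) > 0.5)  # -> True
```

The constructions of the paper achieve small bias with sample spaces of polynomial size, which this exhaustive check can only verify for toy parameters.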
3.
To investigate the quality of heavy-traffic approximations for queues with many servers, we consider the steady-state number of waiting customers in an M/D/s queue as s→∞. In the Halfin–Whitt regime, it is well known that this random variable converges to the supremum of a Gaussian random walk. This paper develops methods that yield more accurate results, in terms of series expansions and inequalities, for the probability of an empty queue and for the mean and variance of the queue-length distribution. This quantifies the relationship between the limiting system and the queue with a small or moderate number of servers. The main idea is to view the M/D/s queue through the prism of the Gaussian random walk: as for the standard Gaussian random walk, we provide scalable series expansions involving terms that include the Riemann zeta function.
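The limiting object named in the abstract, the supremum of a Gaussian random walk with negative drift, is easy to simulate. The minimal Monte Carlo sketch below estimates P(sup = 0), the limit analogue of an empty queue; the drift parameter β and the truncation horizon are arbitrary illustrative choices, and the paper's series expansions are far more accurate than simulation of this kind.

```python
import random

def sup_gaussian_walk(beta, steps, rng):
    """All-time supremum of a Gaussian random walk with i.i.d. N(-beta, 1)
    increments, truncated to `steps` steps (the negative drift makes the
    truncation harmless for moderately long paths)."""
    s, m = 0.0, 0.0
    for _ in range(steps):
        s += rng.gauss(-beta, 1.0)
        m = max(m, s)
    return m

def prob_sup_zero(beta, paths=2000, steps=400, seed=0):
    """Monte Carlo estimate of P(sup = 0): the walk never goes above its start."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(paths)
               if sup_gaussian_walk(beta, steps, rng) == 0.0)
    return hits / paths

# Stronger negative drift corresponds to a less congested system:
# the "queue" is empty (sup = 0) more often.
p_light = prob_sup_zero(beta=2.0)
p_heavy = prob_sup_zero(beta=0.1)
print(p_light > p_heavy)  # -> True
```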
4.
Quick Approximation to Matrices and Applications
Given an m×n matrix A with entries between, say, −1 and 1, and an error parameter ε between 0 and 1, we find a matrix D (implicitly) which is the sum of simple rank-1 matrices, such that the sum of entries of any submatrix (among the 2^{m+n}) of (A−D) is at most εmn in absolute value. Our algorithm takes time dependent only on ε and the allowed probability of failure (not on m, n).
We draw on two lines of research to develop the algorithms: one is built around the fundamental Regularity Lemma of Szemerédi in graph theory and the constructive version of Alon, Duke, Lefmann, Rödl and Yuster. The second is from the papers of Arora, Karger and Karpinski, Fernandez de la Vega and, most directly, Goldwasser, Goldreich and Ron, who develop approximation algorithms for a set of graph problems, typical of which is the maximum cut problem.
From our matrix approximation, the above graph algorithms, the Regularity Lemma and several other results follow in a simple way.
We generalize our approximations to multi-dimensional arrays and from that derive approximation algorithms for all dense Max-SNP problems.
Received: July 25, 1997
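The quantity the decomposition controls — the largest absolute sum of entries over all 2^{m+n} submatrices, often called the cut norm — can be verified by brute force on tiny inputs. This sketch is only a checker for that quantity, not the paper's constant-time approximation algorithm; the example matrices are arbitrary.

```python
import itertools

def cut_norm_brute_force(M):
    """Max over all row subsets S and column subsets T of |sum_{i in S, j in T} M[i][j]|.

    For an approximation D of A, the abstract's guarantee is that this value,
    applied to A - D, is at most eps*m*n. Exponential in m + n: tiny inputs only."""
    m, n = len(M), len(M[0])
    best = 0.0
    for S in itertools.product((0, 1), repeat=m):
        for T in itertools.product((0, 1), repeat=n):
            s = sum(M[i][j] for i in range(m) if S[i]
                    for j in range(n) if T[j])
            best = max(best, abs(s))
    return best

A = [[0.5, -0.5],
     [-0.5, 0.5]]
# A trivial rank-1 approximation: D = 0 (the all-zero matrix).
D = [[0.0, 0.0], [0.0, 0.0]]
R = [[A[i][j] - D[i][j] for j in range(2)] for i in range(2)]
print(cut_norm_brute_force(R))  # -> 0.5 (achieved by a single entry)
```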
5.
Oded Goldreich Shafi Goldwasser Eric Lehman Dana Ron Alex Samorodnitsky 《Combinatorica》2000,20(3):301-337
Given the ability to query an unknown Boolean function f: {0,1}^n → {0,1} at arguments of its choice, the test always accepts a monotone f, and rejects f with high probability if it is ε-far from being monotone (i.e., every monotone function differs from f on more than an ε fraction of the domain). The complexity of the test is O(n/ε).
The analysis of our algorithm relates two natural combinatorial quantities that can be measured with respect to a Boolean function, one global to the function and the other local to it. A key ingredient is the use of a switching (or sorting) operator on functions.
Received: March 29, 1999
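The tester itself is short: sample random edges of the hypercube and look for a violation of monotonicity. The sketch below follows that edge-sampling scheme with O(n/ε) queries; the constant in the trial count is an arbitrary choice, not the paper's.

```python
import random

def edge_tester(f, n, eps, rng, const=2):
    """One-sided property tester for monotonicity of f: {0,1}^n -> {0,1}.

    Repeats O(n/eps) times: pick a random point and a random coordinate, and
    reject if the induced hypercube edge violates monotonicity. A monotone f
    is always accepted, matching the abstract's one-sided guarantee."""
    trials = int(const * n / eps) + 1
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = x[:], x[:]
        lo[i], hi[i] = 0, 1
        if f(lo) > f(hi):   # a violated edge: f decreases as a bit rises
            return False    # reject: f is certainly not monotone
    return True             # accept (correct w.h.p. when f is eps-far)

majority = lambda x: int(sum(x) * 2 > len(x))   # monotone
anti_dict = lambda x: 1 - x[0]                  # 1/2-far from monotone

rng = random.Random(0)
print(edge_tester(majority, 5, 0.1, rng))   # -> True (never rejected)
print(edge_tester(anti_dict, 5, 0.1, rng))  # -> False (w.h.p.; ~100 trials here)
```

The paper's contribution is the analysis showing that ε-far functions have many violated edges, which is what makes this simple sampler work.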
6.
We show that Closest Substring, one of the most important problems in the field of consensus string analysis, is W[1]-hard when parameterized by the number k of input strings (and remains so even over a binary alphabet). This is done by giving a "strongly structure-preserving" reduction from the graph problem Clique to Closest Substring. The problem is therefore unlikely to be solvable in time O(f(k)·n^c) for any function f of k and constant c independent of k; i.e., the combinatorial explosion seemingly inherent to this NP-hard problem cannot be restricted to the parameter k. The problem can therefore be expected to be intractable, in any practical sense, for k ≥ 3. Our result supports the intuition that Closest Substring is computationally much harder than the special case Closest String, although both problems are NP-complete. We also prove W[1]-hardness for other parameterizations in the case of unbounded alphabet size. Our W[1]-hardness result for Closest Substring generalizes to Consensus Patterns, a problem arising in computational biology.
* An extended abstract of this paper was presented at the 19th International Symposium on Theoretical Aspects of Computer Science (STACS 2002), Springer-Verlag, LNCS 2285, pages 262–273, held in Juan-les-Pins, France, March 14–16, 2002.
† Work supported by the Deutsche Forschungsgemeinschaft (DFG), research project "OPAL" (optimal solutions for hard problems in computational biology), NI 369/2.
‡ Work done while the author was with the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen. Work partially supported by the Deutsche Forschungsgemeinschaft (DFG), Emmy Noether research group "PIAF" (fixed-parameter algorithms), NI 369/4.
7.
We propose to approximate the conditional density function of a random variable Y given a dependent random d-vector X by that of Y given θᵀX, where the unit vector θ is selected so that the average Kullback–Leibler discrepancy between the two conditional density functions is minimized. Our approach is nonparametric as far as the estimation of the conditional density functions is concerned. We show that this nonparametric estimator is asymptotically adaptive to the unknown index θ, in the sense that its first-order asymptotic mean squared error is the same as it would be if θ were known. The proposed method is illustrated using both simulated and real-data examples.
8.
Let B be a finite relational structure of finite type, and let CSP(B) denote the following decision problem: given a structure A of the same type as B, is there a homomorphism from A to B? To each relational structure B is naturally associated an algebra whose structure determines the complexity of the associated decision problem. We investigate those finite algebras arising from CSPs of so-called bounded width, i.e., those for which local consistency algorithms effectively decide the problem. We show that if a CSP has bounded width, then the variety generated by the associated algebra omits the Hobby–McKenzie types 1 and 2. This provides a method to prove that certain CSPs do not have bounded width. We give several applications, answering a question of Nešetřil and Zhu [26] by showing that various graph homomorphism problems do not have bounded width. Feder and Vardi [17] have shown that every CSP is polynomial-time equivalent to the retraction problem for a poset we call the Feder–Vardi poset of the structure. We show that, in the case where the structure has a single relation, if the retraction problem for the Feder–Vardi poset has bounded width then the CSP for the structure also has bounded width. This is used to exhibit a finite order-primal algebra whose variety admits type 2 but omits type 1 (provided P ≠ NP).
Presented by M. Valeriote.
Received January 8, 2005; accepted in final form April 3, 2006.
The first author's research is supported by a grant from NSERC and the Centre de Recherches Mathématiques. The second author's research is supported by OTKA grants 034175, 48809 and T 037877. Part of this research was conducted while the second author was visiting Concordia University in Montréal, and also while the first author was visiting the Bolyai Institute in Szeged. The support of NSERC, OTKA and the Bolyai Institute is gratefully acknowledged.
9.
Inversion over a finite field is usually an expensive operation and is a crucial issue in many applications, such as cryptography and error-control codes. In this paper, three different strategies for computing inverses over binary finite fields GF(2^n), called Eulerian, Gaussian, and Euclidean respectively, are discussed and compared in terms of time and space complexity. In particular, some new upper and lower bounds on the respective complexities are derived.
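Two of the three strategies named in the abstract can be sketched directly: Fermat/Euler-style inversion by exponentiation, a^(2^m − 2) = a^(−1), and extended-Euclidean inversion over GF(2)[x]. The sketch below works in GF(2^8) with the AES polynomial as an arbitrary illustrative choice of irreducible modulus (the Gaussian, elimination-based strategy is omitted); it is not the paper's complexity analysis.

```python
def gf_mul(a, b, mod=0x11B, m=8):
    """Carry-less multiply in GF(2^m) modulo the given irreducible polynomial
    (0x11B is the AES polynomial x^8 + x^4 + x^3 + x + 1; any irreducible works)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= mod
    return r

def inv_fermat(a, mod=0x11B, m=8):
    """'Eulerian' inversion: a^(2^m - 2) = a^(-1), by square-and-multiply."""
    result, base, exp = 1, a, (1 << m) - 2
    while exp:
        if exp & 1:
            result = gf_mul(result, base, mod, m)
        base = gf_mul(base, base, mod, m)
        exp >>= 1
    return result

def inv_euclid(a, mod=0x11B):
    """Euclidean inversion: extended GCD on polynomials over GF(2),
    maintaining the invariant s_i * a = r_i (mod the field polynomial)."""
    deg = lambda p: p.bit_length() - 1
    r0, r1, s0, s1 = a, mod, 1, 0
    while deg(r0) > 0:
        if deg(r0) < deg(r1):
            r0, r1, s0, s1 = r1, r0, s1, s0
        shift = deg(r0) - deg(r1)
        r0 ^= r1 << shift
        s0 ^= s1 << shift
    return s0

print(hex(inv_fermat(0x02)), hex(inv_euclid(0x02)))  # -> 0x8d 0x8d
```

Both routines agree on every nonzero field element; the time/space trade-off between them (multiplications vs. shift/XOR iterations) is the kind of comparison the paper quantifies.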
10.
We show that if a language has an interactive proof of logarithmic statistical knowledge complexity, then it belongs to the class AM ∩ co-AM. Thus, if the polynomial-time hierarchy does not collapse, then NP-complete languages do not have logarithmic knowledge complexity. Prior to this work, there was no indication of any consequence contradicting NP-complete languages being proven with even one bit of knowledge. Our result is a common generalization of two previous results: the first asserts that statistical zero knowledge is contained in AM ∩ co-AM [11, 2], while the second asserts that languages recognizable with logarithmic statistical knowledge complexity are in BPP^NP [19].
Next, we consider the relation between the error probability and the knowledge complexity of an interactive proof. Note that reducing the error probability via repetition is not free: it may increase the knowledge complexity. We show that if the negligible error probability is small enough relative to the knowledge complexity k(n), then the language proven is in the third level of the polynomial-time hierarchy. In the standard setting of negligible error probability, there exist PSPACE-complete languages which have sub-linear knowledge complexity. However, if we insist on a sufficiently small (e.g., exponentially small) error probability, then PSPACE-complete languages do not have sub-quadratic knowledge complexity, unless an unexpected complexity-class collapse occurs.
In order to prove our main result, we develop an AM protocol for checking that a samplable distribution D has a given entropy h. For given accuracy and confidence parameters, the verifier runs in time polynomial in their inverses and fails only with small probability to detect an additive error in the entropy. We believe that this protocol is of independent interest. Subsequent to our work, Goldreich and Vadhan [16] established that comparing the entropies of two samplable distributions, when they are noticeably different, is a natural complete promise problem for the class of statistical zero knowledge (SZK).
Received January 6, 2000
This research was performed while the authors were visiting the Computer Science Department at the University of Toronto. A preliminary version of this paper appeared in [27].
* Partially supported by the Technion V.P.R. Fund – N. Haar and R. Zinn Research Fund.
† Partially supported by the grants OTKA T-029255, T-030059, FKFP 0607/1999, and AKP 2000-78 2.1.
11.
George Davie 《Archive for Mathematical Logic》2001,40(8):629-638
Let ω be a Kolmogorov–Chaitin random sequence, with ω_{1:n} denoting the first n digits of ω. Let P be a recursive predicate defined on all finite binary strings such that the Lebesgue measure of the set {ω | ∃n P(ω_{1:n})} is a computable real α. Roughly, P holds with computable probability for a random infinite sequence. Then there is an algorithm which, on input indices for any such P and α, finds an n such that P holds within the first n digits of ω, or not in ω at all. We apply the result to the halting probability Ω and show that various generalizations of the result fail.
Received: 1 December 1998 / Published online: 3 October 2001
12.
In the cake-cutting problem, n ≥ 2 players want to cut a cake into n pieces so that every player gets a "fair" share of the cake by his own measure.
We prove the following result: for every ε > 0, there exists a cake-division scheme for n players that uses at most c_ε·n cuts, in which each player can enforce getting a share of at least (1−ε)/n of the cake according to his own private measure.
* Partially supported by the Institute for Theoretical Computer Science, Prague (project LN00A056 of MŠMT ČR) and grant IAA1019401 of GA AV ČR.
13.
In this paper we consider the worst-case ratio between the capacity of min-cuts and the value of max-flow for multicommodity flow problems. We improve the best known bounds for the min-cut max-flow ratio for multicommodity flows in undirected graphs, replacing the O(log D) in the bound by O(log k), where D denotes the sum of all demands and k denotes the number of commodities. In essence, we prove that up to constant factors the worst min-cut max-flow ratios appear in problems where demands are integral and polynomial in the number of commodities.
Klein, Rao, Agrawal, and Ravi previously proved that if the demands and the capacities are integral, then the min-cut max-flow ratio in general undirected graphs is bounded by O(log C log D), where C denotes the sum of all the capacities. Tragoudas improved this bound to O(log n log D), where n is the number of nodes in the network. Garg, Vazirani and Yannakakis further improved this to O(log k log D). Klein, Plotkin and Rao proved that for planar networks, the ratio is O(log D).
Our result improves the bound for general networks to O(log² k) and the bound for planar networks to O(log k). In both cases our result implies the first non-trivial bound that is independent of the magnitude of the numbers involved. The method presented in this paper can be used to give polynomial-time approximation algorithms for the minimum cuts in the network up to the above factors.
A preliminary version appeared in Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, 1993, 691–697.
Research supported by U.S. Army Research Office Grant DAAL-03-91-G-0102, and by a grant from Mitsubishi Electric Laboratories.
Research supported in part by a Packard Fellowship, an NSF PYI award, a Sloan Fellowship, and by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS-8920550.
14.
In this paper we present approximation algorithms, based on a Lagrangian decomposition via a logarithmic potential reduction, to solve a general packing or min–max resource-sharing problem with M non-negative convex constraints on a convex set B. We generalize a method by Grigoriadis et al. to the case of weak approximate block solvers (i.e., with only constant, logarithmic, or even worse approximation ratios). Given an accuracy ε > 0, we show that our algorithm needs a number of calls to the block solver that depends only on M and ε, a bound independent of the data and of the approximation ratio of the block solver; for small approximation ratios a sharper bound on the number of calls holds. As an application we study the problem of minimizing the maximum edge congestion in a multicast communication network. Interestingly, the block problem here is the classical Steiner tree problem, which can be solved only approximately. We show how to use approximation algorithms for the Steiner tree problem to solve the multicast congestion problem approximately.
This work was done in part while the second author was studying at the University of Kiel.
This paper combines our extended abstracts from the 2nd IFIP International Conference on Theoretical Computer Science, TCS 2002, Montréal, Canada, and the 3rd Workshop on Approximation and Randomization Algorithms in Communication Networks, ARACNE 2002, Roma, Italy. This research was supported in part by the DFG Graduiertenkolleg "Effiziente Algorithmen und Mehrskalenmethoden"; by the EU Thematic Networks APPOL I + II, Approximation and Online Algorithms, IST-1999-14084 and IST-2001-32007; by the EU Research Training Network ARACNE, Approximation and Randomized Algorithms in Communication Networks, HPRN-CT-1999-00112; and by the EU Project CRESCCO, Critical Resource Sharing for Cooperation in Complex Systems, IST-2001-33135. The second author was also supported by an MITACS grant of Canada and by NSERC Discovery Grant DG 5-48923.
15.
We prove tight lower bounds, of up to n^ε, for the monotone depth of functions in monotone-P. As a result we achieve the separation of the following classes.
1. monotone-NC ≠ monotone-P.
2. For every i ≥ 1, monotone-NC^i ≠ monotone-NC^{i+1}.
3. More generally: for any integer function D(n), up to n^ε (for some ε > 0), we give an explicit example of a monotone Boolean function that can be computed by polynomial-size monotone Boolean circuits of depth D(n), but that cannot be computed by any (fan-in 2) monotone Boolean circuits of depth less than Const·D(n) (for some constant Const).
Only a separation of monotone-NC¹ from monotone-NC² was previously known.
Our argument is more general: we define a new class of communication-complexity search problems, referred to below as DART games, and we prove a tight lower bound for the communication complexity of every member of this class. As a result we get lower bounds for the monotone depth of many functions. In particular, we get the following bounds:
1. For st-connectivity, we get a tight lower bound of Ω(log² n). That is, we get a new proof of Karchmer–Wigderson's theorem, as an immediate corollary of our general result.
2. For the k-clique function, with k up to n^ε (for some ε > 0), we get a tight lower bound of Ω(k log n). This lower bound was previously known for k ≤ log n [1]. For larger k, however, only a bound of Ω(k) was previously known.
Received: December 19, 1997
16.
We consider property testing of bipartiteness in the bounded-degree model: the input is a graph with maximum degree d, represented by incidence lists, and the testing algorithm can perform queries of the form "who is the i-th neighbor of vertex v". The tester should determine with high probability whether the graph is bipartite or ε-far from bipartite, for any given distance parameter ε. Distance between graphs is defined to be the fraction of entries on which the graphs differ in their incidence-list representation. Our testing algorithm has query complexity and running time O(√N · poly(log N / ε)), where N is the number of graph vertices. It was shown before that Ω(√N) queries are necessary (for constant ε), and hence the performance of our algorithm is tight (in its dependence on N) up to polylogarithmic factors.
In our analysis we use techniques that were previously applied to prove fast convergence of random walks on expander graphs. Here we use the contrapositive statement, by which slow convergence implies small cuts in the graph, and further show that these cuts have certain additional properties. This implication is applied in showing that for any graph, the graph vertices can be divided into disjoint subsets such that: (1) the total number of edges between the different subsets is small; and (2) each subset itself exhibits a certain mixing property that is useful in our analysis.
Received: February 6, 1998
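The actual tester and its parameter choices are considerably more involved than this, but the random-walk idea can be caricatured in a few lines: walks from a common start vertex that reach the same vertex with both an even- and an odd-length path witness an odd cycle, and hence non-bipartiteness. Everything below (parameters, function name) is an illustrative toy, not the paper's algorithm.

```python
import random

def walks_tester(adj, walks, length, rng):
    """Toy odd-cycle hunter: from one random start vertex, run several random
    walks and record the parity of the path length at each visited vertex.
    A vertex reached with both parities witnesses an odd closed walk, so the
    graph cannot be bipartite; otherwise accept (may err on far graphs)."""
    n = len(adj)
    s = rng.randrange(n)
    seen = {}  # vertex -> set of parities of walk lengths reaching it
    for _ in range(walks):
        v, parity = s, 0
        for _ in range(length):
            v = rng.choice(adj[v])
            parity ^= 1
            seen.setdefault(v, set()).add(parity)
            if len(seen[v]) == 2:
                return False  # odd cycle detected: reject
    return True

# Even cycle C6 (bipartite) vs. odd cycle C5 (every vertex lies on an odd cycle).
c6 = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
c5 = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]
rng = random.Random(0)
print(walks_tester(c6, walks=20, length=12, rng=rng))  # -> True
print(walks_tester(c5, walks=20, length=12, rng=rng))  # -> False (w.h.p.)
```

On a bipartite graph every path parity is fixed by the bipartition, so the toy tester never rejects it; the paper's contribution is making the rejection side work with only Õ(√N) queries on graphs that are ε-far from bipartite.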
17.
Magnús M. Halldórsson Guy Kortsarz Jaikumar Radhakrishnan Sivaramakrishnan Sivasubramanian 《Combinatorica》2007,27(5):519-550
A complete partition of a graph G is a partition of its vertex set in which any two distinct classes are connected by an edge. Let cp(G) denote the maximum number of classes in a complete partition of G. This measure was defined in 1969 by Gupta [19], and is known to be NP-hard to compute for several classes of graphs. We obtain essentially tight lower and upper bounds on the approximability of this problem. We show that there is a randomized polynomial-time algorithm that, given a graph G with n vertices, produces a complete partition of size Ω(cp(G)/√(lg n)). This algorithm can be derandomized.
We show that the upper bound is essentially tight: there is a constant C > 1 such that if there is a randomized polynomial-time algorithm that, for all large n, when given a graph G with n vertices produces a complete partition into at least C·cp(G)/√(lg n) classes, then NP ⊆ RTime(n^{O(lg lg n)}). The problem of finding a complete partition of a graph is thus the first natural problem whose approximation threshold has been determined to be of the form Θ((lg n)^c) for some constant c strictly between 0 and 1.
The work reported here is a merger of the results reported in [30] and [21].
18.
Density Estimation with Replicate Heteroscedastic Measurements
We present a deconvolution estimator for the density function of a random variable from a set of independent replicate measurements. We assume that measurements are made with normally distributed errors having unknown and possibly heterogeneous variances. The estimator generalizes well-known deconvoluting kernel density estimators, with error variances estimated from the replicate observations. We derive expressions for the integrated mean squared error and examine its rate of convergence as n → ∞ with the number of replicates fixed. We investigate the finite-sample performance of the estimator through a simulation study and an application to real data.
19.
In this article, we study a model of a single-variable sampling plan with Type I censoring. Assume that the quality of an item in a batch is measured by a random variable following a Weibull distribution W(λ, m), with scale parameter λ and shape parameter m having a gamma–discrete prior distribution, or with σ = 1/λ and m having an inverse-gamma–uniform prior distribution. The decision function is based on the Kaplan–Meier estimator. Explicit expressions for the Bayes risk are then derived. In addition, an algorithm is suggested so that an optimal sampling plan can be determined approximately after a finite number of search steps.
20.
We derive recursions for the probability distribution of random sums by computer algebra. Unlike the well-known Panjer-type recursions, they are of finite order and thus allow for computation in linear time. This efficiency is bought by the assumption that the probability generating function of the claim size be algebraic. The probability generating function of the claim number is supposed to be from the rather general class of D-finite functions.
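The paper's computer-algebra-derived finite-order recursions are not reproduced here; for contrast, the following is a sketch of the classical Panjer recursion the abstract refers to, for a compound Poisson sum. Its inner sum runs over the whole claim-size support, which is exactly the source of the super-linear cost the paper removes. The rate λ and the claim-size distribution are arbitrary illustrative choices.

```python
import math

def panjer_compound_poisson(lam, f, kmax):
    """Panjer recursion for S = X_1 + ... + X_N, N ~ Poisson(lam),
    claim sizes X_i i.i.d. with P(X = j) = f[j] (no mass at 0 assumed here).

    g_0 = exp(-lam);  g_k = (lam / k) * sum_j j * f_j * g_{k-j}.
    With unbounded claim-size support the inner sum has k terms, so the total
    work is quadratic in kmax -- unlike a finite-order recursion."""
    g = [math.exp(-lam)] + [0.0] * kmax
    for k in range(1, kmax + 1):
        g[k] = (lam / k) * sum(j * f.get(j, 0.0) * g[k - j]
                               for j in range(1, k + 1))
    return g

def brute_force(lam, f, kmax, nmax=60):
    """Independent check: P(S = k) = sum_n P(N = n) * (n-fold convolution of f)(k)."""
    g = [0.0] * (kmax + 1)
    conv = {0: 1.0}  # distribution of X_1 + ... + X_n, starting with n = 0
    for n in range(nmax + 1):
        pn = math.exp(-lam) * lam ** n / math.factorial(n)
        for k, p in conv.items():
            if k <= kmax:
                g[k] += pn * p
        nxt = {}
        for k, p in conv.items():
            for j, q in f.items():
                nxt[j + k] = nxt.get(j + k, 0.0) + p * q
        conv = nxt
    return g

probs = panjer_compound_poisson(2.0, {1: 0.6, 2: 0.4}, 8)
check = brute_force(2.0, {1: 0.6, 2: 0.4}, 8)
print(all(abs(a - b) < 1e-9 for a, b in zip(probs, check)))  # -> True
```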