Similar Documents
 20 similar documents found (search time: 31 ms)
1.
For various random constraint satisfaction problems there is a significant gap between the largest constraint density for which solutions exist and the largest density for which any polynomial-time algorithm is known to find solutions. Examples of this phenomenon include random k-SAT, random graph coloring, and a number of other random constraint satisfaction problems. To understand this gap, we study the structure of the solution space of random k-SAT (i.e., the set of all satisfying assignments viewed as a subgraph of the Hamming cube). We prove that for densities well below the satisfiability threshold, the solution space decomposes into an exponential number of connected components, and we give quantitative bounds on their diameter, volume, and number. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 38, 251–268, 2011

2.
A combinatorial constraint satisfaction problem aims at expressing in unified terms a wide spectrum of problems in various branches of mathematics, computer science, and AI. The generalized satisfiability problem is NP-complete, but many of its restricted versions can be solved in polynomial time. It is known that the computational complexity of a restricted constraint satisfaction problem depends only on the set of polymorphisms of the relations allowed in the problem. For the case where the set of such relations is invariant under some Mal'tsev operation, we show that the corresponding constraint satisfaction problem can be solved in polynomial time. Translated from Algebra i Logika, Vol. 45, No. 6, pp. 655–686, November–December, 2006.

3.
We show that almost surely the rank of the adjacency matrix of the Erdős–Rényi random graph G(n,p) equals the number of nonisolated vertices for any c ln n/n ≤ p ≤ 1/2, where c is an arbitrary positive constant larger than 1/2. In particular, the adjacency matrix of the giant component (a.s.) has full rank in this range. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008
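The relation in this abstract is easy to probe numerically. A minimal sketch (ours, not the paper's): sample G(n,p), compare the rank of its adjacency matrix with the number of nonisolated vertices. The inequality rank ≤ #nonisolated always holds (isolated vertices contribute zero rows); the theorem says the two agree a.s. in the stated range of p.

```python
import numpy as np

def sample_gnp_adjacency(n, p, rng):
    """Sample the symmetric 0/1 adjacency matrix of G(n, p)."""
    upper = rng.random((n, n)) < p          # i.i.d. coin flips
    a = np.triu(upper, k=1)                 # keep the strict upper triangle
    return (a | a.T).astype(float)          # symmetrize, no self-loops

rng = np.random.default_rng(0)
n, p = 60, 0.4                              # p is well inside [c ln n / n, 1/2]
a = sample_gnp_adjacency(n, p, rng)
rank = np.linalg.matrix_rank(a)
nonisolated = int((a.sum(axis=1) > 0).sum())
print(rank, nonisolated)  # rank <= nonisolated always; equal w.h.p. here
```

Note that `matrix_rank` computes a numerical rank over the reals via SVD, which is adequate at this scale.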

4.
We study two natural models of randomly generated constraint satisfaction problems. We determine how quickly the domain size must grow with n to ensure that these models are robust in the sense that they exhibit a non-trivial threshold of satisfiability, and we determine the asymptotic order of that threshold. We also provide resolution complexity lower bounds for these models. One of our results immediately yields a theorem regarding homomorphisms between two random graphs. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006

5.
For a graph property P, the edit distance of a graph G from P, denoted E_P(G), is the minimum number of edge modifications (additions or deletions) one needs to apply to G to turn it into a graph satisfying P. What is the furthest graph on n vertices from P, and what is the largest possible edit distance from P? Denote this maximal distance by ed(n,P). This question is motivated by algorithmic edge-modification problems, in which one wishes to find or approximate the value of E_P(G) given an input graph G. A monotone graph property is closed under removal of edges and vertices. Trivially, for any monotone property, the largest edit distance is attained by a complete graph. We show that this is a simple instance of a much broader phenomenon. A hereditary graph property is closed under removal of vertices. We prove that for any hereditary graph property P, a random graph with an edge density that depends on P essentially achieves the maximal distance from P, that is: ed(n,P) = E_P(G(n,p(P))) + o(n²) with high probability. The proofs combine several tools, including strengthened versions of the Szemerédi regularity lemma, properties of random graphs, and probabilistic arguments. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008

6.
Fernando A. C. C. Fontes, Sofia O. Lopes, PAMM 7(1) (2007), 1061701–1061702
For some optimal control problems with pathwise state constraints, the standard versions of the necessary conditions of optimality are unable to provide useful information to select minimizers. There exists some literature on stronger forms of the maximum principle, the so-called nondegenerate necessary conditions, that can be informative for those problems. These conditions can be applied when certain constraint qualifications are satisfied. However, when the state constraints have higher index (i.e., their first derivative with respect to time does not depend on the control), these nondegenerate necessary conditions cannot be used, because the constraint qualification assumptions are never satisfied for higher-index state constraints. We note that control problems with higher-index state constraints arise frequently in practice. A typical example is a mechanical system with a constraint on the position (an obstacle in the path, for example) in which the control acts on the second derivative of the position (a force or acceleration). Here, we provide a nondegenerate form of the necessary conditions that can be applied to nonlinear problems with higher-index state constraints. When addressing a problem with a state constraint of index k, the result described is applicable under a constraint qualification that involves the k-th derivative of the state constraint, corresponding to the first derivative that depends explicitly on the control. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

7.
In this paper we consider the problem of finding large collections of vertices and edges satisfying particular separation properties in random regular graphs of degree r, for each fixed r ≥ 3. We prove both constructive lower bounds and combinatorial upper bounds on the maximal sizes of these sets. The lower bounds are proved by analyzing a class of algorithms that return feasible solutions for the given problems. The analysis uses the differential equation method proposed by Wormald [Lectures on Approximation and Randomized Algorithms, PWN, Warsaw, 1999, pp. 239–298]. The upper bounds are proved by direct combinatorial means. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008

8.
We consider random systems generated by two-sided compositions of random surface diffeomorphisms, together with an ergodic Borel probability measure μ. Let D(μ_ω) be the dimension of the sample measure μ_ω; we prove a formula relating D(μ_ω) to the entropy and Lyapunov exponents of the random system, where D(μ_ω) is the Hausdorff dimension, the lower box dimension, or the upper box dimension of μ_ω.

9.
Let ccl(G) denote the order of the largest complete minor in a graph G (also called the contraction clique number) and let Gn,p denote a random graph on n vertices with edge probability p. Bollobás, Catlin, and Erdős (Eur J Combin 1 (1980), 195–199) asymptotically determined ccl(Gn,p) when p is a constant. Łuczak, Pittel, and Wierman (Trans Am Math Soc 341 (1994), 721–748) gave bounds on ccl(Gn,p) when p is very close to 1/n, i.e., inside the phase transition. We show that for every ε > 0 there exists a constant C such that whenever C/n < p < 1 − ε, then asymptotically almost surely ccl(Gn,p) = (1 ± ε)n/√(log_b n), where b := 1/(1 − p). If p = C/n for a constant C > 1, then ccl(Gn,p) = Θ(√n). This extends the results of Bollobás, Catlin, and Erdős (Eur J Combin 1 (1980), 195–199) and answers a question of Krivelevich and Sudakov (preprint, 2006). © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008

10.
We study a random system of cn linear equations over n variables in GF(2), where each equation contains exactly r variables; this is equivalent to r-XORSAT. Previous work has established a clustering threshold for this model: for densities any constant factor below the threshold, with high probability all solutions form a single well-connected cluster, whereas for densities any constant factor above it, with high probability the solutions partition into well-connected, well-separated clusters (with probability tending to 1 as n → ∞). This is part of a general clustering phenomenon which is hypothesized to arise in most of the commonly studied models of random constraint satisfaction problems, via sophisticated but mostly nonrigorous techniques from statistical physics. We extend that study to the critical window around the clustering threshold, and prove that the connectivity parameters of the r-XORSAT clusters undergo a smooth transition there.
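The object of study here is concrete enough to sample directly. A minimal sketch (ours, not the paper's machinery; all names are our own) that builds a random r-XORSAT instance and decides satisfiability by Gaussian elimination over GF(2), with each equation stored as a variable bitmask plus a right-hand side:

```python
import random

def random_xorsat(n, m, r, rng):
    """m random XOR equations over n variables, r distinct variables each.

    Each equation is a pair [mask, rhs]: bit v of mask is set iff
    variable v appears, and the equation reads XOR of those vars == rhs.
    """
    rows = []
    for _ in range(m):
        mask = 0
        for v in rng.sample(range(n), r):
            mask |= 1 << v
        rows.append([mask, rng.randrange(2)])
    return rows

def gf2_solvable(rows):
    """Gaussian elimination over GF(2); True iff the system is consistent."""
    basis = {}  # pivot bit -> (mask, rhs), mask's highest set bit is the pivot
    for mask, rhs in rows:
        while mask:
            piv = mask.bit_length() - 1
            if piv not in basis:
                basis[piv] = (mask, rhs)
                break
            bmask, brhs = basis[piv]
            mask ^= bmask          # eliminate the pivot variable
            rhs ^= brhs
        else:
            if rhs:
                return False       # row reduced to 0 = 1: inconsistent
    return True

rng = random.Random(1)
n, r = 50, 3
print(gf2_solvable(random_xorsat(n, n // 2, r, rng)))
```

At density c = 0.5 (well below the r = 3 satisfiability threshold) such an instance is satisfiable with high probability.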

11.
We study the cover time of a random walk on the largest component of the random graph Gn,p. We determine its value up to a factor 1 + o(1) whenever np = c > 1, c = O(ln n). In particular, we show that the cover time is not monotone for c = Θ(ln n). We also determine the cover time of the k-cores, k ≥ 2. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008
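For readers unfamiliar with the quantity, the cover time is the number of steps a simple random walk needs to visit every vertex. A toy simulation (ours, purely illustrative; adjacency-list representation is an assumption):

```python
import random

def cover_time(adj, start, rng):
    """Steps until a simple random walk from `start` has visited every vertex.

    adj: dict mapping vertex -> list of neighbors (connected graph assumed).
    """
    seen = {start}
    v, steps = start, 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])   # move to a uniformly random neighbor
        seen.add(v)
        steps += 1
    return steps

# One run on a 4-cycle; at least 3 steps are needed to see the other 3 vertices.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
t = cover_time(cycle4, 0, random.Random(7))
```

Averaging many such runs estimates the expected cover time that results like this abstract pin down asymptotically.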

12.
Suppose that the patients' survival times Y are random variables following the semiparametric regression model Y = Xβ + g(T) + ε, where (X,T) is a random vector taking values in R × [0,1], β is an unknown parameter, g(·) is an unknown smooth regression function, and ε is the random error with zero mean and variance σ². It is assumed that (X,T) is independent of ε. The estimators β_n and g_n(·) of β and g(·) are defined, respectively, when the observations are randomly censored on the right and the censoring distribution is unknown. Moreover, it is shown that β_n is asymptotically normal and g_n(·) is weakly consistent with rate O_p(n^{-1/3}). Project supported by China Postdoctoral Science Foundation and the National Natural Science Foundation of China.

13.
We present an expected polynomial time algorithm to generate an unlabeled connected cubic planar graph uniformly at random. We first consider rooted connected cubic planar graphs, i.e., we count connected cubic planar graphs up to isomorphisms that fix a certain directed edge. Based on decompositions along the connectivity structure, we derive recurrence formulas for the exact number of rooted cubic planar graphs. This leads to rooted 3-connected cubic planar graphs, which have a unique embedding on the sphere. Special care has to be taken for rooted graphs that have a sense-reversing automorphism. Therefore we introduce the concept of colored networks, which stand in bijective correspondence to rooted 3-connected cubic planar graphs with given symmetries. Colored networks can again be decomposed along the connectivity structure. For rooted 3-connected cubic planar graphs embedded in the plane, we switch to the dual and count rooted triangulations. Since all these numbers can be evaluated in polynomial time using dynamic programming, rooted connected cubic planar graphs can be generated uniformly at random in polynomial time by inverting the decomposition along the connectivity structure. To generate connected cubic planar graphs without a root uniformly at random, we apply rejection sampling and obtain an expected polynomial time algorithm. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008

14.
The fundamentals of interval analysis on directed acyclic graphs (DAGs) for global optimization and constraint propagation have recently been proposed in Schichl and Neumaier (J. Global Optim. 33, 541–562, 2005). For representing numerical problems, the authors use DAGs whose nodes are subexpressions and whose directed edges are computational flows. Compared to tree-based representations [Benhamou et al. Proceedings of the International Conference on Logic Programming (ICLP’99), pp. 230–244. Las Cruces, USA (1999)], DAGs offer the essential advantage of more accurately handling the influence of subexpressions shared by several constraints on the overall system during propagation. In this paper we show how interval constraint propagation and search on DAGs can be made practical and efficient by: (1) flexibly choosing the nodes on which propagations must be performed, and (2) working with partial subgraphs of the initial DAG rather than with the entire graph. We propose a new interval constraint propagation technique which exploits the influence of subexpressions on all the constraints together rather than on individual constraints. We then show how the new propagation technique can be integrated into branch-and-prune search to solve numerical constraint satisfaction problems. This algorithm is able to outperform its obvious contenders, as shown by the experiments.
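To illustrate the kind of propagation this abstract builds on (a generic hull-consistency step, not the paper's DAG algorithm; all names are ours): an interval constraint propagator narrows each variable's domain by projecting a constraint such as x + y = z onto it.

```python
def intersect(a, b):
    """Intersect two closed intervals (lo, hi); None if the result is empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def propagate_sum(x, y, z):
    """Narrow intervals x, y, z under x + y = z (assumes consistency).

    Each variable is intersected with the interval implied by the
    other two: z ⊆ x + y, x ⊆ z − y, y ⊆ z − x.
    """
    z = intersect(z, (x[0] + y[0], x[1] + y[1]))
    x = intersect(x, (z[0] - y[1], z[1] - y[0]))
    y = intersect(y, (z[0] - x[1], z[1] - x[0]))
    return x, y, z

x, y, z = propagate_sum((0, 10), (2, 3), (4, 5))
# x narrows from [0, 10] to z − y = [4 − 3, 5 − 2] = [1, 3]
```

Repeating such projections over all constraints until a fixed point is the basic loop that the paper's DAG representation makes efficient when subexpressions are shared.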

15.
The semantic constructions and results for definite programs do not extend when dealing with negation. The main problem is related to a well-known problem in the area of algebraic specification: if we fix a constraint domain as a given model, its free extension by means of a set of Horn clauses defining a set of new predicates is semicomputable. However, if the language of the extension is richer than Horn clauses, its free extension (if it exists) is not necessarily semicomputable. In this paper we present a framework that allows us to deal with these problems in a novel way. This framework is based on two main ideas: a reformulation of the notion of constraint domain and a functorial presentation of our semantics. In particular, the semantics of a logic program P is defined in terms of three functors that apply to constraint domains and provide the operational, the least-fixpoint, and the logical semantics of P, respectively. To be more concrete, the idea is that applying the first functor to a specific constraint solver provides the operational semantics of P that uses this solver; applying the second to a specific domain provides the least fixpoint of P over this domain; and applying the third to a theory of constraints provides the logical theory associated with P. In this context, we prove that these three functors are in some sense equivalent.

16.
The “classical” random graph models, in particular G(n,p), are “homogeneous,” in the sense that the degrees (for example) tend to be concentrated around a typical value. Many graphs arising in the real world do not have this property, having, for example, power‐law degree distributions. Thus there has been a lot of recent interest in defining and studying “inhomogeneous” random graph models. One of the most studied properties of these new models is their “robustness”, or, equivalently, the “phase transition” as an edge density parameter is varied. For G(n,p), p = c/n, the phase transition at c = 1 has been a central topic in the study of random graphs for well over 40 years. Many of the new inhomogeneous models are rather complicated; although there are exceptions, in most cases precise questions such as determining exactly the critical point of the phase transition are approachable only when there is independence between the edges. Fortunately, some models studied have this property already, and others can be approximated by models with independence. Here we introduce a very general model of an inhomogeneous random graph with (conditional) independence between the edges, which scales so that the number of edges is linear in the number of vertices. This scaling corresponds to the p = c/n scaling for G(n,p) used to study the phase transition; also, it seems to be a property of many large real‐world graphs. Our model includes as special cases many models previously studied. We show that, under one very weak assumption (that the expected number of edges is “what it should be”), many properties of the model can be determined, in particular the critical point of the phase transition, and the size of the giant component above the transition. We do this by relating our random graphs to branching processes, which are much easier to analyze. 
We also consider other properties of the model, showing, for example, that when there is a giant component, it is “stable”: for a typical random graph, no matter how we add or delete o(n) edges, the size of the giant component does not change by more than o(n). © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 31, 3–122, 2007

17.
18.
Summary. In this paper a new bivariate exponential distribution, arising naturally in the theory of Poisson line processes, is studied. The distribution has some interesting and useful properties which render it suitable for use in statistical modelling work. It is presented in the spirit of adding to the repertoire of bivariate exponential forms. It joins other models, such as those of Downton (1970, J. R. Statist. Soc. B, 32, 408–417), Marshall and Olkin (1967, J. Appl. Prob., 4, 291–302) and Nagao and Kadoya (1971, Bulletin of the Disaster Prevention Research Institute, 20(3), 183–215), which have their origins in the theory of stochastic processes.

19.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered. Here, f is a scalar, x an n-vector, and φ a q-vector, with q < n. The use of the augmented penalty function is explored in connection with the ordinary gradient algorithm. The augmented penalty function W(x, λ, k) is defined to be the linear combination of the augmented function F(x, λ) and the constraint error P(x), where the q-vector λ is the Lagrange multiplier and the scalar k is the penalty constant. The ordinary gradient algorithm is constructed in such a way that the following properties are satisfied in toto or in part: (a) descent property on the augmented penalty function, (b) descent property on the augmented function, (c) descent property on the constraint error, (d) constraint satisfaction on the average, or (e) individual constraint satisfaction. Properties (d) and (e) are employed to first order only. With the above considerations in mind, two classes of algorithms are developed. For algorithms of Class I, the multiplier is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier is determined so that the constraint is satisfied to first order. Algorithms of Class I have properties (a), (b), (c) and include Algorithms (I-α) and (I-β). In the former algorithm, the penalty constant is held unchanged for all iterations; in the latter, the penalty constant is updated at each iteration so as to ensure satisfaction of property (d). Algorithms of Class II have properties (a), (c), (e) and include Algorithms (II-α) and (II-β). In the former algorithm, the penalty constant is held unchanged for all iterations; in the latter, the penalty constant is updated at each iteration so as to ensure satisfaction of property (b). Four numerical examples are presented.
They show that algorithms of Class II exhibit faster convergence than algorithms of Class I and that algorithms of type (β) exhibit faster convergence than algorithms of type (α). Therefore, Algorithm (II-β) is the best among those analyzed. This is due to the fact that, in Algorithm (II-β), individual constraint satisfaction is enforced and that descent properties hold for the augmented penalty function, the augmented function, and the constraint error. This research was supported by the National Science Foundation, Grant No. GP-27271. The authors are indebted to Dr. J. N. Damoulakis for analytical and numerical assistance. Discussions with Professor H. Y. Huang are acknowledged.
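The W(x, λ, k) of this abstract can be made concrete with a toy sketch (our own minimal implementation, not the paper's Class I/II algorithms; all names, step sizes, and the multiplier-update rule are our assumptions): gradient descent on W = f + λᵀφ + k φᵀφ, alternated with a first-order multiplier update, on a problem whose solution is known.

```python
import numpy as np

def solve_augmented_penalty(f_grad, phi, phi_jac, x, k=10.0,
                            step=0.02, outer=20, inner=500):
    """Minimize f subject to phi(x) = 0 via gradient descent on the
    augmented penalty W = f + lam·phi + k·phi·phi, updating lam between
    rounds of descent (a method-of-multipliers-style sketch)."""
    lam = np.zeros(len(phi(x)))
    for _ in range(outer):
        for _ in range(inner):
            # grad W = grad f + J^T (lam + 2 k phi)
            x = x - step * (f_grad(x) + phi_jac(x).T @ (lam + 2.0 * k * phi(x)))
        lam = lam + 2.0 * k * phi(x)   # first-order multiplier update
    return x, lam

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0  (optimum at (0.5, 0.5))
f_grad = lambda x: 2.0 * x
phi = lambda x: np.array([x[0] + x[1] - 1.0])
phi_jac = lambda x: np.array([[1.0, 1.0]])
x, lam = solve_augmented_penalty(f_grad, phi, phi_jac, np.array([0.0, 0.0]))
```

At the optimum the recovered multiplier satisfies ∇f + λ∇φ = 0, i.e., λ = −1 for this toy problem.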

20.
By the breakthrough work of Håstad [J ACM 48(4) (2001), 798–859], several constraint satisfaction problems are now known to have the following approximation resistance property: satisfying more clauses than what picking a random assignment would achieve is NP-hard. This is the case, for example, for Max E3-Sat, Max E3-Lin, and Max E4-Set Splitting. A notable exception to this extreme hardness is constraint satisfaction over two variables (2-CSP); as a corollary of the celebrated Goemans-Williamson algorithm [J ACM 42(6) (1995), 1115–1145], we know that every Boolean 2-CSP has a nontrivial approximation algorithm whose performance ratio is better than that obtained by picking a random assignment to the variables. An intriguing question then is whether this is also the case for 2-CSPs over larger, non-Boolean domains. This question is still open, and is equivalent to whether the generalization of Max 2-SAT to domains of size d can be approximated to a factor better than (1 − 1/d²). In an attempt to make progress towards this question, in this paper we prove, first, that a slight restriction of this problem, namely a generalization of linear inequations with two variables per constraint, is not approximation resistant, and, second, that the Not-All-Equal Sat problem over domain size d with three variables per constraint is approximation resistant, for every d ≥ 3. In the Boolean case, Not-All-Equal Sat with three variables per constraint is equivalent to Max 2-SAT and thus has a nontrivial approximation algorithm; for larger domain sizes, Max 2-SAT can be reduced to Not-All-Equal Sat with three variables per constraint. Our approximation algorithm implies that a wide class of 2-CSPs called regular 2-CSPs can all be approximated beyond their random assignment threshold. © 2004 Wiley Periodicals, Inc. Random Struct. Alg. 2004
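The random assignment threshold mentioned here can be checked exactly on tiny instances. A sketch (ours; the clause encoding, where each constraint forbids a single pair of values, is our assumption chosen so that a uniform random assignment satisfies each clause with probability exactly 1 − 1/d²):

```python
from itertools import product
from fractions import Fraction

def random_assignment_value(n, d, clauses):
    """Exact expected fraction of satisfied clauses under a uniform random
    assignment to n variables over domain {0, ..., d-1}.

    Each clause (i, a, j, b) is violated only when x_i == a and x_j == b.
    """
    total = Fraction(0)
    for assign in product(range(d), repeat=n):
        sat = sum(not (assign[i] == a and assign[j] == b)
                  for i, a, j, b in clauses)
        total += Fraction(sat, len(clauses))
    return total / d ** n

# Each clause excludes exactly one of the d^2 value pairs, so by linearity
# the expectation is 1 - 1/d^2 for any instance:
val = random_assignment_value(3, 3, [(0, 1, 1, 2), (1, 0, 2, 2)])
```

An algorithm "beats the random assignment threshold" precisely when it guarantees strictly more than this fraction.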
