Similar Articles
Found 20 similar articles.
1.
《Journal of Complexity》1996,12(2):134-166
Sparse elimination exploits the structure of a multivariate polynomial by considering its Newton polytope instead of its total degree. We concentrate on polynomial systems that generate zero-dimensional ideals. A monomial basis for the coordinate ring is defined from a mixed subdivision of the Minkowski sum of the Newton polytopes. We offer a new simple proof relying on the construction of a sparse resultant matrix, which leads to the computation of a multiplication map and all common zeros. The size of the monomial basis equals the mixed volume and its computation is equivalent to computing the mixed volume, so the latter is a measure of intrinsic complexity. On the other hand, our algorithms have worst-case complexity proportional to the volume of the Minkowski sum. In order to derive bounds in terms of the sparsity parameters, we establish new bounds on the Minkowski sum volume as a function of mixed volume. To this end, we prove a lower bound on mixed volume in terms of Euclidean volume which is of independent interest.  相似文献   
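As an aside, the planar case of the quantities above can be made concrete: in two dimensions the mixed volume of Newton polytopes P and Q equals area(P+Q) - area(P) - area(Q), and by Bernstein's theorem it counts the common zeros of a generic sparse system. A minimal Python sketch (not from the paper; the helper names are ours):

```python
def hull(points):
    """Andrew's monotone chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) -
                                   (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace area of a convex polygon given as a CCW vertex list."""
    n = len(poly)
    if n < 3:
        return 0.0
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons (brute force: hull of pairwise sums)."""
    return hull([(p[0]+q[0], p[1]+q[1]) for p in P for q in Q])

def mixed_volume(P, Q):
    """Planar mixed volume: MV(P, Q) = area(P+Q) - area(P) - area(Q)."""
    P, Q = hull(P), hull(Q)
    return area(minkowski_sum(P, Q)) - area(P) - area(Q)

# Newton polytope of a generic affine polynomial a + b*x + c*y is the unit triangle:
T = [(0, 0), (1, 0), (0, 1)]
print(mixed_volume(T, T))  # 1.0: a generic pair of lines meets in one point
```

For two generic affine polynomials the mixed volume is 1, matching the single intersection point of two lines; the Minkowski sum volume (here 2) is the quantity the paper's worst-case complexity is proportional to.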

2.
沈云付 《数学学报》2001,44(1):21-28
In this paper we study quantifier elimination for the theory T of groups of prime order, together with the corresponding complexity. We prove that the theory T has the quantifier elimination property, and use this property to give an upper complexity bound for the decision problem of the theory T.

3.
In this article we study generalized Nash equilibrium problems (GNEP) and bilevel optimization side by side. This perspective comes from the crucial fact that both problems heavily depend on parametric issues. Observing the intrinsic complexity of GNEP and bilevel optimization, we emphasize that it originates from unavoidable degeneracies occurring in parametric optimization. By intrinsic complexity we mean the involved geometrical complexity of Nash equilibria and bilevel feasible sets, such as the appearance of kinks and boundary points, non-closedness, discontinuity and bifurcation effects. The main goal is to illustrate the complexity of those problems originating from parametric optimization and singularity theory. By taking the study of singularities in parametric optimization into account, the structural analysis of Nash equilibria and bilevel feasible sets is performed. For GNEPs, the number of players' common constraints becomes crucial. In fact, for GNEPs without common constraints and for classical NEPs we show that, generically, all Nash equilibria are jointly nondegenerate Karush–Kuhn–Tucker points. Consequently, they are isolated. However, in the presence of common constraints Nash equilibria will constitute a higher-dimensional set. In bilevel optimization, we describe the global structure of the bilevel feasible set in the case of a one-dimensional leader's variable. We point out that the typical discontinuities of the leader's objective function will be caused by the follower's singularities. The latter phenomenon occurs independently of the viewpoint of the optimistic or pessimistic approach. In the case of higher dimensions, optimistic and pessimistic approaches are discussed with respect to possible bifurcation of the follower's solutions.

4.
Relations between discrete and continuous complexity models are considered. The present paper is devoted to combining both models. In particular we analyze the 3-Satisfiability problem. The existence of fast decision procedures for this problem over the reals is examined based on certain conditions on the discrete setting. Moreover we study the behaviour of exponential time computations over the reals depending on the real complexity of 3-Satisfiability. This will be done using tools from complexity theory over the integers.
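For reference, the discrete side of the problem is easy to state in code. A brute-force 3-Satisfiability decision procedure (exponential time, illustration only; the clause encoding is our own convention, not from the paper):

```python
from itertools import product

def sat3(clauses, n):
    """Decide satisfiability of a 3-CNF formula over variables 1..n by
    exhaustive search. Literals are signed integers: 3 means x3, -3 means
    NOT x3. Runs in O(2^n * |clauses|) time."""
    for bits in product([False, True], repeat=n):
        # A clause is satisfied if some literal agrees with the assignment.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (x1 or x2 or x3) and (not x1 or not x2 or not x3) is satisfiable;
# a variable forced both true and false is not.
print(sat3([[1, 2, 3], [-1, -2, -3]], 3))   # True
print(sat3([[1, 1, 1], [-1, -1, -1]], 1))   # False
```

The paper's question is whether such decisions can be made fast over the reals; the exhaustive procedure above is the baseline discrete computation.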

5.
Computational complexity of the theory of complete binary trees
李志敏  罗里波  李祥 《数学学报》2008,51(2):311-318
The first-order theory of complete binary trees has been proved to admit quantifier elimination, and the CB ranks of elements in models of complete binary trees have been computed. In this paper we study the first-order theory of complete binary trees by means of bounded Ehrenfeucht–Fraïssé games, and prove that the time complexity of this theory has the upper bound 2^{2^{cn}} and its space complexity has the upper bound 2^{dn}, where n is the input length and c, d are suitable constants.

6.
ON A PROJECTION THEOREM OF QUASIVARIETIES IN ELIMINATION THEORY
It is proved that the class of quasi-varieties in affine space is closed under the projection operation, though this is not so for algebraic varieties.

7.
8.
We show that if a language has an interactive proof of logarithmic statistical knowledge-complexity, then it belongs to the class AM ∩ co-AM. Thus, if the polynomial time hierarchy does not collapse, then NP-complete languages do not have logarithmic knowledge complexity. Prior to this work, there was no indication that anything would contradict NP-complete languages being proven with even one bit of knowledge. Our result is a common generalization of two previous results: the first asserts that statistical zero knowledge is contained in AM ∩ co-AM [11, 2], while the second asserts that the languages recognizable in logarithmic statistical knowledge complexity are in BPP^NP [19]. Next, we consider the relation between the error probability and the knowledge complexity of an interactive proof. Note that reducing the error probability via repetition is not free: it may increase the knowledge complexity. We show that if the negligible error probability is less than an exponentially small function of the knowledge complexity k(n), then the language proven is in the third level of the polynomial time hierarchy. In the standard setting of negligible error probability, there exist PSPACE-complete languages which have sub-linear knowledge complexity. However, if we insist, for example, that the error probability is correspondingly small, then PSPACE-complete languages do not have sub-quadratic knowledge complexity, unless an unlikely collapse of complexity classes occurs. In order to prove our main result, we develop an AM protocol for checking that a samplable distribution D has a given entropy h. For any accuracy and confidence fractions ε and δ, the verifier runs in time polynomial in 1/ε and 1/δ, and fails with probability at most δ to detect an additive error ε in the entropy. We believe that this protocol is of independent interest. Subsequent to our work Goldreich and Vadhan [16] established that the problem of comparing the entropies of two samplable distributions, when they are noticeably different, is a natural complete promise problem for the class of statistical zero knowledge (SZK).
Received January 6, 2000. This research was performed while the authors were visiting the Computer Science Department at the University of Toronto; a preliminary version of this paper appeared in [27]. Partially supported by the Technion V.P.R. Fund – N. Haar and R. Zinn Research Fund. Partially supported by grants OTKA T-029255, T-030059, FKFP 0607/1999, and AKP 2000-78 2.1.
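The entropy-checking task addressed by the protocol can be illustrated with a naive sampling estimator. This is not the paper's AM protocol, only a plug-in estimate over sampler functions of our own choosing:

```python
import math
import random
from collections import Counter

def empirical_entropy(sample_fn, m):
    """Plug-in estimate (in bits) of the Shannon entropy of a samplable
    distribution, from m independent samples. A naive stand-in for the
    verifier's task of checking that a distribution has a given entropy."""
    counts = Counter(sample_fn() for _ in range(m))
    return -sum((c / m) * math.log2(c / m) for c in counts.values())

random.seed(1)
fair_bit = lambda: random.randrange(2)                          # entropy 1 bit
two_fair = lambda: (random.randrange(2), random.randrange(2))   # entropy 2 bits
print(round(empirical_entropy(fair_bit, 20000), 1))
print(round(empirical_entropy(two_fair, 20000), 1))
```

With 20000 samples the estimates concentrate near 1.0 and 2.0 bits respectively; the point of the paper's protocol is to certify such a value interactively rather than by trusting raw samples.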

9.
10.
《Journal of Complexity》2002,18(1):242-286
Commencing with a brief survey of Lie-group theory and differential equations evolving on Lie groups, we describe a number of numerical algorithms designed to respect Lie-group structure: Runge–Kutta–Munthe-Kaas schemes, Fer and Magnus expansions. This is followed by a derivation of the computational cost of Fer and Magnus expansions, whose conclusion is that for orders four, six, and eight an appropriately discretized Magnus method is always cheaper than a Fer method of the same order. Each Lie-group method of the kind surveyed in this paper requires the computation of a matrix exponential. Classical methods, e.g., Krylov-subspace and rational approximants, may fail to map elements in a Lie algebra to a Lie group. Therefore we survey a number of approximants based on the splitting approach and demonstrate that their cost is comparable with (and often superior to) that of classical methods.
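The group-preservation issue mentioned above is visible already in the smallest case: the exact exponential maps the Lie algebra so(2) into the group SO(2), while a plain first-order step I + A does not. A self-contained sketch (our illustration; a truncated Taylor series stands in for an exact exponential, which for this skew-symmetric input preserves orthogonality to truncation accuracy):

```python
def expm(A, terms=30):
    """Matrix exponential of a 2x2 matrix via a truncated Taylor series."""
    I = [[1.0, 0.0], [0.0, 1.0]]
    result = [row[:] for row in I]
    term = [row[:] for row in I]
    for k in range(1, terms):
        # term <- term @ A / k accumulates A^k / k!
        term = [[sum(term[i][j] * A[j][l] for j in range(2)) / k
                 for l in range(2)] for i in range(2)]
        result = [[result[i][l] + term[i][l] for l in range(2)] for i in range(2)]
    return result

def is_orthogonal(M, tol=1e-9):
    """Check M^T M = I, i.e. membership in the orthogonal group."""
    mtm = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return all(abs(mtm[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))

theta = 0.7
A = [[0.0, -theta], [theta, 0.0]]       # element of the Lie algebra so(2)
Y_exp = expm(A)                          # lands in the group SO(2): a rotation
Y_euler = [[1.0 + A[0][0], A[0][1]],     # I + A: a plain Euler-type step
           [A[1][0], 1.0 + A[1][1]]]
print(is_orthogonal(Y_exp))    # True
print(is_orthogonal(Y_euler))  # False: the Euler step leaves the group
```

This is exactly why the survey restricts attention to exponential approximants that map the Lie algebra into the Lie group.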

11.
We show that the feasibility of a system of m linear inequalities over the cone of symmetric positive semidefinite matrices of order n can be tested in mn^{O(min{m,n^2})} arithmetic operations with ln^{O(min{m,n^2})}-bit numbers, where l is the maximum binary size of the input coefficients. We also show that any feasible system of dimension (m,n) has a solution X such that log||X|| ≤ ln^{O(min{m,n^2})}.

12.
We study two approaches to replace a finite mathematical programming problem with inequality constraints by a problem that contains only equality constraints. The first approach lifts the feasible set into a high-dimensional space by the introduction of quadratic slack variables. We show that then not only the number of critical points but also the topological complexity of the feasible set grow exponentially. The second approach, on the other hand, is based on an interior point technique and lifts an approximation of the feasible set into a space with only one additional dimension. Here only Karush–Kuhn–Tucker points with respect to the positive and negative objective function in the original problem give rise to critical points of the smoothed problem, so that the number of critical points as well as the topological complexity can at most double.
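The interior-point lifting can be seen on a toy problem (our example, not the paper's): for minimizing x subject to x ≥ 0, the log-barrier function x - mu*log(x) has a single critical point x = mu, which traces a path converging to the constrained minimizer x* = 0 as mu shrinks:

```python
def barrier_min(mu, lo=1e-12, hi=10.0, iters=200):
    """Minimize x - mu*log(x) on (0, hi] by bisection on the derivative
    1 - mu/x, which vanishes exactly at x = mu."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 1.0 - mu / mid > 0:   # derivative positive: minimizer lies to the left
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for mu in [1.0, 0.1, 0.01]:
    print(round(barrier_min(mu), 4))   # 1.0, 0.1, 0.01: the path heads to 0
```

One critical point per barrier parameter, in line with the abstract's claim that the smoothed problem at most doubles the count, in contrast to the exponential growth under quadratic slack variables.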

13.
For a real square-free multivariate polynomial F, we treat the general problem of finding real solutions of the equation F=0, provided that the real solution set {F=0} is compact. We allow that the equation F=0 may have singular real solutions. We are going to decide whether this equation has a non-singular real solution and, if this is the case, we exhibit one for each generically smooth connected component of {F=0}. We design a family of elimination algorithms of intrinsic complexity which solves this problem. In the worst case, the complexity of our algorithms does not exceed the already known extrinsic complexity bound of (nd)^{O(n)} for the elimination problem under consideration, where n is the number of indeterminates of F and d its (positive) degree. In the case that the real variety defined by F is smooth, there already exist algorithms of intrinsic complexity that solve our problem. However, these algorithms cannot be used when F=0 admits singular real solutions.

14.
15.
Given a nonconvex simple polygon $P$ with $n$ vertices, is it possible to construct a data structure which after preprocessing can answer halfspace area queries (i.e., given a line, determine the area of the portion of the polygon above the line) in $o(n)$ time? We answer negatively, proving an $\Omega(n)$ lower bound on the query time of any data structure performing this task. We then consider the offline version of the same problem: given a polygon $P$ with $n$ vertices and $k$ query lines, we present an algorithm that computes the area of $P$ on both sides of each line in $O(k^{2/3}n^{2/3+\varepsilon}+(n+k)\,\mathrm{polylog}\,n)$ time. Variants of this method allow the query of a collection of weighted polygons with or without holes, and solve several other related problems within the same time bounds.
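Answering one query by brute force reduces to half-plane clipping plus the shoelace formula; a sketch of this O(n) evaluation, which is exactly what the lower bound says preprocessing cannot beat (our code, assuming a counterclockwise simple polygon; the function name is ours):

```python
def clip_area_above(poly, a, b, c):
    """Area of the part of a simple CCW polygon lying in the half-plane
    a*x + b*y + c >= 0, via Sutherland-Hodgman half-plane clipping
    followed by the shoelace formula. O(n) per query line."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp >= 0:
            out.append(p)                          # p is inside the half-plane
        if (fp >= 0) != (fq >= 0):                 # edge crosses the line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    if len(out) < 3:
        return 0.0
    m = len(out)
    s = sum(out[i][0] * out[(i + 1) % m][1] - out[(i + 1) % m][0] * out[i][1]
            for i in range(m))
    return abs(s) / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(clip_area_above(square, 0, 1, -0.5))  # 0.5: half of the unit square above y = 0.5
```

The area below the line is then the total area minus this value, matching the "both sides of each line" output of the offline algorithm.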

16.
17.
Felsner, Stefan; Kant, Ravi; Rangan, C. Pandu; Wagner, Dorothea 《Order》2000,17(2):179-193
The recognition complexity of ordered set properties is considered in terms of how many questions must be put to an adversary to decide if an unknown partial order has the prescribed property. We prove a lower bound of order $n^2$ for properties that are characterized by forbidden substructures of fixed size. For the properties of being connected and of having exactly $k$ comparable pairs, $k \le n^2/4$, we show that the recognition complexity is $\binom{n}{2}$. The complexity of interval orders is exactly $\binom{n}{2} - 1$. We further establish bounds for being a lattice, being of height $k$, and having width $k$.
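Interval orders are a classic example of a property characterized by a forbidden substructure of fixed size: a poset is an interval order iff it contains no induced 2+2. A brute-force recognizer over a full comparability matrix (our illustration of the property itself, not the paper's adversary-query model):

```python
from itertools import permutations

def is_interval_order(lt, n):
    """A finite poset is an interval order iff it has no induced 2+2:
    no elements a < b and c < d with a,d incomparable and c,b incomparable.
    lt[i][j] == True means i < j in the partial order."""
    for a, b, c, d in permutations(range(n), 4):
        if (lt[a][b] and lt[c][d]
                and not lt[a][d] and not lt[d][a]
                and not lt[c][b] and not lt[b][c]):
            return False
    return True

# A 4-chain 0 < 1 < 2 < 3 is an interval order; the disjoint union of two
# 2-chains (0 < 1 and 2 < 3, nothing else) is the forbidden 2+2 itself.
chain = [[j > i for j in range(4)] for i in range(4)]
two_plus_two = [[False] * 4 for _ in range(4)]
two_plus_two[0][1] = two_plus_two[2][3] = True
print(is_interval_order(chain, 4))         # True
print(is_interval_order(two_plus_two, 4))  # False
```

Reading the full matrix already uses all $\binom{n}{2}$ pair relations; the paper's result is that an adversary can force essentially that many queries.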

18.
We consider the cost of estimating an error bound for the computed solution of a system of linear equations, i.e., estimating the norm of a matrix inverse. Under some technical assumptions we show that computing even a coarse error bound for the solution of a triangular system of equations costs at least as much as testing whether the product of two matrices is zero. The complexity of the latter problem is in turn conjectured to be the same as matrix multiplication, matrix inversion, etc. Since most error bounds in practical use have much lower complexity, this means they should sometimes exhibit large errors. In particular, it is shown that condition estimators that (1) perform at least one operation on each matrix entry, and (2) are asymptotically faster than any zero tester, must sometimes over- or underestimate the inverse norm by a factor that grows faster than any polynomial in n, where n is the dimension of the input matrix and k is the bitsize. Our results hold for the RAM model with bit complexity, as well as computations over rational and algebraic numbers, but not real or complex numbers. Our results also extend to estimating error bounds or condition numbers for other linear algebra problems such as computing eigenvectors. Received September 10, 1999. Final version received August 23, 2000.

19.
《Journal of Complexity》1993,9(4):499-517
The notion of Kolmogorov program-size complexity (or algorithmic information) is defined here for arbitrary objects. Using a special form of recursive topological spaces, called partition spaces, we define a recursive topology which uses a level of partition for the approximation of arbitrary objects instead of the usual metric. It is shown that the formulation for arbitrary objects satisfies most of the previous results obtained usually for natural numbers and for sequences of symbols. Thus we claim that the existence of abstract computers formalizes the idea that many real-life objects may, in fact, be calculated (or approximated) effectively. We also show the existence of a universal probability measure for our arbitrary objects.

20.
We introduce the basic principle of the information complexity (ICOMP) criterion, a model selection criterion that links goodness of fit with a measure of model complexity. The ICOMP criterion, proposed by Bozdogan, can be viewed as an approximation to the sum of two Kullback–Leibler distances. We first study the asymptotic consistency of the ICOMP class of criteria in the case where the true model is among the candidate models, and then establish the asymptotic consistency in the case where the true model is not among the candidates. For finite sample sizes, the model selected by the ICOMP criterion is closer to the true model than the models selected by other commonly used criteria.
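As a small illustration of the ingredients (our sketch, not the paper's derivation): the Kullback–Leibler distance between two univariate Gaussians in closed form, and a covariance-complexity term of the C1 type attributed to Bozdogan, computed from eigenvalues and vanishing exactly when all eigenvalues are equal. ICOMP-type criteria combine a lack-of-fit term with such a complexity term:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL distance KL(N(mu1, s1^2) || N(mu2, s2^2)) in closed form."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def c1_complexity(eigs):
    """C1-type complexity of a covariance matrix from its eigenvalues:
    (s/2)*log(mean of eigenvalues) - (1/2)*sum of log eigenvalues.
    By the AM-GM inequality this is >= 0, with equality iff all
    eigenvalues coincide (a perfectly 'balanced' covariance)."""
    s = len(eigs)
    return (s / 2) * math.log(sum(eigs) / s) - 0.5 * sum(math.log(e) for e in eigs)

print(kl_gauss(0, 1, 0, 1))            # 0.0: identical distributions
print(kl_gauss(0, 1, 1, 1))            # 0.5: unit mean shift at unit variance
print(round(c1_complexity([2.0, 2.0, 2.0]), 6))  # 0.0: equal eigenvalues
```

The exact form of the complexity penalty used in the paper may differ; the code only illustrates how a KL-style discrepancy and an eigenvalue-imbalance penalty behave.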


Copyright©北京勤云科技发展有限公司  京ICP备09084417号