Similar Documents
20 similar documents found.
1.
2.
Problem 5.1 in (L. Blum, M. Shub, and S. Smale, 1989, Bull. Amer. Math. Soc. 21(1), 1–46) asks if there is a decision problem that cannot be solved in polynomial time by a Turing machine, but can be solved in polynomial time on a unit-cost algebraic RAM with operations {+, -, *, /, <}, and without the integer division operation. We present a problem that is not known to be solvable in polynomial time on a Turing machine, but can be solved in polynomial time on a unit-cost algebraic RAM. This is strong evidence for an affirmative answer to Problem 5.1.

3.
Computing a zero-cluster of a polynomial sufficiently accurately within the available precision of computation has long been an important issue. All the deterministic numerical methods known to us produce varying degrees of error. Often the errors are so dominant that distinguishing two zeros in the cluster becomes practically difficult. Multiple zeros, on the other hand, can be tackled more easily and do not pose any serious computational problem. We discuss here the limits of both deterministic and randomized methods for zero-clusters and propose a simple exhaustive search algorithm that obtains the zeros in a real/complex zero-cluster in a reasonable time. We present the computational error and the computational/time complexity of this algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy better than 0.005%. We stress that no other algorithm can perform better than the proposed algorithm in an ultra-high-speed computing environment for most real-world problems.
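As a rough illustration of the exhaustive-search idea (a simplified sketch, not the authors' algorithm), the fragment below scans a box in the complex plane on a grid whose spacing is tied to the 0.005% accuracy limit quoted above and reports the grid points where |p(z)| is smallest. The function name, box parameters, and acceptance tolerance are hypothetical choices.

```python
import numpy as np

def cluster_zeros_grid(coeffs, center, radius, rel_acc=5e-5):
    """Exhaustive grid search for zeros of a polynomial inside a square box
    centred at `center` (simplified sketch, not the paper's exact algorithm).

    coeffs  : polynomial coefficients, highest degree first (numpy.polyval order)
    center  : centre of the search box in the complex plane
    radius  : half-width of the box
    rel_acc : target relative accuracy; 5e-5 corresponds to the 0.005%
              measurement limit mentioned in the abstract
    """
    # Grid spacing derived from the accuracy target (hypothetical choice).
    step = max(abs(center), 1.0) * rel_acc
    xs = np.arange(center.real - radius, center.real + radius + step, step)
    ys = np.arange(center.imag - radius, center.imag + radius + step, step)
    X, Y = np.meshgrid(xs, ys)
    Z = X + 1j * Y
    vals = np.abs(np.polyval(coeffs, Z))
    # Keep grid points where |p(z)| is within a small factor of its minimum;
    # nearby surviving points belong to the same zero of the cluster.
    return Z[vals <= 10.0 * vals.min()]

# Example: a tight cluster of three zeros of (z - 1)^3 + 1e-12 near z = 1.
# coeffs = np.poly([1, 1, 1]) + np.array([0, 0, 0, 1e-12])
# print(cluster_zeros_grid(coeffs, center=1 + 0j, radius=1e-3))
```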

4.
We discuss polynomial interpolation in several variables from a polynomial ideal point of view. One of the results states that if I is a real polynomial ideal with real variety and if its codimension is equal to the cardinality of its variety, then for each monomial order there is a unique polynomial that interpolates on the points in the variety. The result is motivated by the problem of constructing cubature formulae, and it leads to a theorem on cubature formulae which can be considered an extension of Gaussian quadrature formulae to several variables.

5.
In this paper we introduce the parametric minquantile problem, a weighted generalisation of k-th maximum minimisation. It is shown that, under suitable quasiconvexity assumptions, its resolution can be reduced to solving a polynomial number of minmax problems. It is also shown how this simultaneously solves (parametric) maximal covering problems. It follows that bicriteria problems, where the aim is to both maximise the covering and minimise the cover level, are reducible to a discrete problem, to which any multiple criteria method may be applied.

6.
We study general geometric techniques for bounding the spectral gap of a reversible Markov chain. We show that the best bound obtainable using these techniques can be computed in polynomial time via semidefinite programming, and is off by at most a factor of order log² n, where n is the number of states.

7.
We have two polynomial time results for the uniform word problem for a quasivariety Q: (a) The uniform word problem for Q can be solved in polynomial time iff one can find a certain congruence on finite partial algebras in polynomial time. (b) Let Q* be the relational class determined by Q. If any universal Horn class between the universal closure S(Q*) and the weak embedding closure S̄(Q*) of Q* is finitely axiomatizable, then the uniform word problem for Q is solvable in polynomial time. This covers Skolem's 1920 solution to the uniform word problem for lattices and Evans' 1953 applications of the weak embeddability property for finite partial V-algebras.

8.
Two-dimensional integral equations of convolution type can be solved by the spectral method in the Chebyshev–Laguerre polynomial basis. We construct an exact parametric representation of the generalized Laguerre spectrum of the required solution with respect to the known Laguerre spectrum of the right-hand side and the kernel of the equation. Translated from Matematicheskie Metody i Fiziko-Mekhanicheskie Polya, Issue 36, 1992, pp. 84–88.

9.
We define a special kind of probabilistically checkable proof system, namely, probabilistically checkable proof calculi (PCP calculi). A proof in such a calculus can be verified with sufficient confidence by examining only one random path in the proof tree, without reading the whole proof. The verification procedure just checks all applications of inference rules along the path; its running time is assumed to be polynomial in the theorem length. It is shown that the deductive power of PCP calculi is characterized as follows: (i) the decision problem for theorems is in PSPACE for all PCP calculi; and (ii) this problem is PSPACE-hard for some of the calculi. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 241, 1997, pp. 97–116.

10.
Given a graph G on n nodes, consider the cone consisting of the positive semidefinite matrices (with real or complex entries) having a zero entry at every off-diagonal position corresponding to a non-edge of G. The sparsity order of G is defined as the maximum rank of a matrix lying on an extreme ray of this cone. It is known that the graphs with sparsity order 1 are the chordal graphs, and a characterization of the graphs with sparsity order 2 is conjectured in [1] in the real case. We show in this paper the validity of this conjecture. Moreover, we characterize the graphs with sparsity order 2 in the complex case, and we give a decomposition result for these graphs in both the real and complex cases. As an application, these graphs can be recognized in polynomial time. We also indicate how an inequality from [17] relating the sparsity order of a graph and its minimum fill-in can be derived from a result concerning the dimension of the faces of this cone.

11.
This paper deals with exponential neighborhoods for combinatorial optimization problems. Exponential neighborhoods are large sets of feasible solutions whose size grows exponentially with the input length. We are especially interested in exponential neighborhoods over which the TSP (respectively, the QAP) can be solved in polynomial time, and we investigate combinatorial and algorithmic questions related to such neighborhoods. First, we perform a careful study of exponential neighborhoods for the TSP. We investigate neighborhoods that can be defined in a simple way via assignments, matchings in bipartite graphs, partial orders, trees and other combinatorial structures. We identify several properties of these combinatorial structures that lead to polynomial time optimization algorithms, and we also provide variants that slightly violate these properties and lead to NP-complete optimization problems. Whereas it is relatively easy to find exponential neighborhoods over which the TSP can be solved in polynomial time, the corresponding situation for the QAP looks pretty hopeless: every exponential neighborhood considered in this paper provably leads to an NP-complete optimization problem for the QAP.

12.
Light Linear Logic (LLL) and Intuitionistic Light Affine Logic (ILAL) are logics that capture polynomial time computation. It is known that every polynomial time function can be represented by a proof of these logics via the proofs-as-programs correspondence. Furthermore, there is a reduction strategy which normalizes a given proof in polynomial time. Given this polynomial time "weak" normalization theorem, it is natural to ask whether a "strong" form of the polynomial time normalization theorem holds. In this paper, we introduce an untyped term calculus, called the Light Affine Lambda Calculus (λLA), which corresponds to ILAL. λLA is a bi-modal λ-calculus with certain constraints, endowed with very simple reduction rules. The main property of λLA is polynomial time strong normalization: any reduction strategy normalizes a given λLA term in a polynomial number of reduction steps, and indeed in polynomial time. Since proofs of ILAL are structurally representable by terms of λLA, we conclude that the same holds for ILAL. This is a full version of the paper [21] presented at LICS 2001.

13.
We describe a new class of graphs for which the stability number can be obtained in polynomial time. The algorithm is based on an iterative procedure that, at each step, builds from a graph G a new graph G′ that has fewer nodes and has the property that α(G′) = α(G) − 1. This new class of graphs is different from the known classes for which the stability number can be computed in polynomial time.

14.
The output frequency response function (OFRF) of Volterra systems can be described as a polynomial function of the model parameters. However, the analytical determination of the OFRF is very computationally intensive, especially for higher-order OFRFs. To circumvent this problem, a numerical method can be adopted, provided that a series of simulation or experimental data for this polynomial function is available. In this study, it is shown theoretically that the analytical parametric relationship of the OFRF up to any order can be determined accurately by using a simple least squares method, and every specific component of the output spectrum can also be determined explicitly, based on the OFRF's parametric characteristics. Practical techniques to obtain a unique and accurate solution for the least squares method are discussed. This study provides a fundamental result for determining the analytical parametric relationship of such system polynomial functions by numerical methods.
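As a minimal sketch of the numerical route described above (with a single, hypothetical model parameter; in general the OFRF is a multivariate polynomial in several parameters), one can build a monomial design matrix from a series of simulated output-spectrum values and solve the resulting least squares problem:

```python
import numpy as np

def fit_ofrf_polynomial(param_values, spectrum_values, degree):
    """Least squares fit of the output spectrum at one frequency as a
    polynomial in a single model parameter (illustrative sketch only).

    param_values    : parameter values used in the simulations/experiments
    spectrum_values : complex output spectrum observed at those values
    degree          : assumed polynomial degree of the OFRF in the parameter
    """
    a = np.asarray(param_values, dtype=float)
    y = np.asarray(spectrum_values, dtype=complex)
    # Design matrix of monomials 1, a, a^2, ..., a^degree.
    A = np.vander(a, degree + 1, increasing=True).astype(complex)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # coeffs[k] multiplies param**k in the fitted OFRF
```

A unique, well-conditioned solution needs more (and well spread) parameter samples than unknown coefficients, which is the kind of practical issue the abstract alludes to.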

15.
One way functions and pseudorandom generators
Pseudorandom generators transform in polynomial time a short random "seed" into a long "pseudorandom" string. This string cannot be random in the classical sense of [6], but testing that requires an unrealistic amount of time (say, exhaustive search for the seed). Such pseudorandom generators were first discovered in [2] assuming that the function (a^x mod b) is one-way, i.e., easy to compute, but hard to invert on a noticeable fraction of instances. In [12] this assumption was generalized to the existence of any one-way permutation. The permutation requirement is sufficient but still very strong. It is unlikely to be proven necessary, unless something crucial, like P=NP, is discovered. Below, among other observations, a weaker assumption about one-way functions is proposed, which is not only sufficient, but also necessary for the existence of pseudorandom generators.
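For illustration, here is a minimal sketch of the kind of generator referred to above, built from iterated modular exponentiation (the function assumed one-way in [2]); the parameter names and the choice of hard-core bit follow the standard Blum–Micali construction and are not taken from this paper:

```python
def blum_micali_bits(seed, g, p, n_bits):
    """Blum-Micali-style pseudorandom bit generator (illustrative sketch).

    p must be a large prime and g a generator of the multiplicative group
    mod p; security rests on the assumption that x -> g**x mod p is one-way.
    """
    x = seed
    bits = []
    for _ in range(n_bits):
        x = pow(g, x, p)                           # iterate the one-way function
        bits.append(1 if x < (p - 1) // 2 else 0)  # hard-core bit of the discrete log
    return bits

# Toy (insecure) parameters purely to show usage:
# print(blum_micali_bits(seed=6, g=3, p=2_147_483_647, n_bits=16))
```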

16.
We consider the problem of Lagrange polynomial interpolation in high or countably infinite dimension, motivated by the fast computation of solutions to partial differential equations (PDEs) depending on a possibly large number of parameters which result from the application of generalised polynomial chaos discretisations to random and stochastic PDEs. In such applications there is a substantial advantage in considering polynomial spaces that are sparse and anisotropic with respect to the different parametric variables. In an adaptive context, the polynomial space is enriched at different stages of the computation. In this paper, we study an interpolation technique in which the sample set is incremented as the polynomial dimension increases, leading therefore to a minimal amount of PDE solving. This construction is based on the standard principle of tensorisation of a one-dimensional interpolation scheme and sparsification. We derive bounds on the Lebesgue constants for this interpolation process in terms of their univariate counterpart. For a class of model elliptic parametric PDEs, we have shown in Chkifa et al. (Modél. Math. Anal. Numér. 47(1):253–280, 2013) that certain polynomial approximations based on Taylor expansions converge in terms of the polynomial dimension with an algebraic rate that is robust with respect to the parametric dimension. We show that this rate is preserved when using our interpolation algorithm. We also propose a greedy algorithm for the adaptive selection of the polynomial spaces based on our interpolation scheme, and illustrate its performance both on scalar-valued functions and on parametric elliptic PDEs.
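Since the analysis above is driven by the Lebesgue constants of the underlying one-dimensional interpolation operators, here is a small, self-contained sketch that numerically estimates the Lebesgue constant of 1-D Lagrange interpolation on a given point set (an illustration of the quantity being bounded, not of the paper's hierarchical construction):

```python
import numpy as np

def lebesgue_constant(points, n_eval=2000):
    """Numerical estimate of the Lebesgue constant of one-dimensional
    Lagrange interpolation at `points`, over the interval [-1, 1]."""
    pts = np.asarray(points, dtype=float)
    x = np.linspace(-1.0, 1.0, n_eval)
    # Lagrange basis polynomials evaluated on a fine grid.
    L = np.ones((len(pts), n_eval))
    for j, pj in enumerate(pts):
        for k, pk in enumerate(pts):
            if k != j:
                L[j] *= (x - pk) / (pj - pk)
    # Lebesgue function = sum of |l_j(x)|; its maximum is the Lebesgue constant.
    return np.abs(L).sum(axis=0).max()

# Equispaced points give a much larger constant than Chebyshev points:
# n = 10
# print(lebesgue_constant(np.linspace(-1, 1, n)))
# print(lebesgue_constant(np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))))
```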

17.
We consider discrete bilevel optimization problems where the follower solves an integer program with a fixed number of variables. Using recent results in parametric integer programming, we present polynomial time algorithms for pure and mixed integer bilevel problems. For the mixed integer case where the leader's variables are continuous, our algorithm also detects whether the infimum cost fails to be attained, a difficulty that has been identified but not directly addressed in the literature. In this case, it yields a "better than fully polynomial time" approximation scheme with running time polynomial in the logarithm of the absolute precision. For the pure integer case where the leader's variables are integer, and hence optimal solutions are guaranteed to exist, we present an algorithm which runs in polynomial time when the total number of variables is fixed.

18.
In growth-curve research it is frequently assumed that the growth curve is a polynomial in time. In practice, researchers mainly use higher-order polynomials to obtain more precise estimates. This method has several defects: the model is easily affected by outliers, and the polynomial hypothesis may be too strong in practice. In this paper we therefore propose a nonparametric approach, local polynomial regression, instead of the parametric method for estimation in the growth curve model. We give the nonparametric growth curve model and its nonparametric estimation, and then discuss the large-sample properties of the local polynomial estimate. The ideal theoretical choice of a local bandwidth is also discussed in detail. Finally, a simulation study, using fitted curves and box plots of the average squared error, shows clearly that the nonparametric approach performs much better than the parametric technique.
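A minimal sketch of the local polynomial idea (local linear fitting with a Gaussian kernel; the kernel, the bandwidth handling, and all names are illustrative choices, not the paper's exact estimator):

```python
import numpy as np

def local_linear_curve(t, y, t_grid, bandwidth):
    """Local linear (degree-1 local polynomial) estimate of a growth curve.

    t, y      : observation times and responses
    t_grid    : time points at which the curve is estimated
    bandwidth : kernel bandwidth h controlling the amount of smoothing
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    fitted = np.empty(len(t_grid))
    for i, t0 in enumerate(np.asarray(t_grid, dtype=float)):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)    # Gaussian kernel weights
        X = np.column_stack([np.ones_like(t), t - t0])    # local linear design
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))   # weighted least squares
        fitted[i] = beta[0]                               # intercept = estimate at t0
    return fitted

# Example: noisy quadratic growth recovered without assuming a global polynomial.
# t = np.linspace(0, 10, 80); y = 0.3 * t**2 + np.random.normal(0, 1, 80)
# curve = local_linear_curve(t, y, np.linspace(0, 10, 50), bandwidth=1.0)
```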

19.
We construct a probabilistic polynomial time algorithm that computes the mixed discriminant of n given positive definite matrices within a 2^O(n) factor. As a corollary, we show that the permanent of an n × n nonnegative matrix and the mixed volume of n ellipsoids in R^n can be computed within a 2^O(n) factor by probabilistic polynomial time algorithms. Since every convex body can be approximated by an ellipsoid, the last algorithm can be used for approximating in polynomial time the mixed volume of n convex bodies in R^n within a factor of n^O(n).

20.
In this article, we study the minimal polynomials of parametric matrices. Using the concept of (comprehensive) Gröbner systems for parametric ideals, we introduce the notion of a minimal polynomial system for a parametric matrix, i.e. we decompose the space of parameters into a finite set of cells and for each cell we give the corresponding minimal polynomial of the matrix. We also present an algorithm for computing a minimal polynomial system for a given parametric matrix.
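To make the object concrete, here is a small sketch (using sympy; not the paper's comprehensive-Gröbner-system algorithm) that computes the minimal polynomial of a matrix as the first monic linear dependence among its powers. For a parametric matrix it returns only the generic answer; parameter values where the coefficients degenerate (e.g. a = 1 in the commented example) are exactly what the cell decomposition of a minimal polynomial system accounts for.

```python
import sympy as sp

def minimal_polynomial_of_matrix(M, x):
    """Minimal polynomial of the square matrix M in the variable x, found as
    the first linear dependence among I, M, M^2, ... (generic computation;
    parametric degeneracies are not handled here)."""
    n = M.shape[0]
    powers = [sp.eye(n)]
    for k in range(1, n + 1):
        powers.append(sp.simplify(powers[-1] * M))
        # Columns of A are the vectorised powers I, M, ..., M^k.
        A = sp.Matrix([[P[i, j] for P in powers]
                       for i in range(n) for j in range(n)])
        null = A.nullspace()
        if null:
            c = null[0] / null[0][k]          # make the top coefficient 1 (monic)
            return sp.expand(sum(c[i] * x**i for i in range(k + 1)))

# a, x = sp.symbols('a x')
# M = sp.Matrix([[a, 0], [0, 1]])
# Generic minimal polynomial: (x - a)*(x - 1); on the cell a = 1 it drops to x - 1.
# print(minimal_polynomial_of_matrix(M, x))
```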
