Similar literature
 20 similar documents found.
1.
Classical fractal dimensions (Hausdorff dimension and packing dimension) have recently been effectivized by (i) characterizing them in terms of real-valued functions called gales, and (ii) imposing computability and complexity constraints on these gales. This paper surveys these developments and their applications in algorithmic information theory and computational complexity theory. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
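For orientation, the gale characterization mentioned above can be stated as follows (a standard formulation from the effective-dimension literature, recalled here in our own notation, not quoted from the abstract): an s-gale is a function $d:\{0,1\}^*\to[0,\infty)$ satisfying
\[
d(w) \;=\; 2^{-s}\bigl[d(w0)+d(w1)\bigr] \quad\text{for all finite strings } w,
\]
it succeeds on an infinite sequence $S$ if $\limsup_{n\to\infty} d(S\upharpoonright n)=\infty$, and
\[
\dim_{\mathrm H}(X) \;=\; \inf\{\, s\ge 0 \;:\; \text{some } s\text{-gale succeeds on every } S\in X \,\}.
\]
Packing dimension has the analogous characterization with $\liminf$ in place of $\limsup$, and the effective dimensions arise by requiring the gales to be computable or resource-bounded.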

2.
We present a model of computation for string functions over single-sorted, total algebraic structures and study some basic features of a general theory of computability within this framework. Our concept generalizes the Blum-Shub-Smale setting of computability over the reals and other rings. By dealing with strings of arbitrary length instead of tuples of fixed length, some assumptions needed for deeper results in earlier approaches to generalized recursion theory become superfluous. Moreover, this gives the basis for introducing computational complexity in a BSS-like manner. Relationships both to classical computability and to Friedman's concept of eds computability are established. Two kinds of nondeterminism as well as several variants of recognizability are investigated with respect to their interdependencies and their dependence on properties of the underlying structures. For structures of finite signature there are universal programs with the usual characteristics; the general case of not necessarily finite signature will be studied in a separate, forthcoming paper.

3.
Computability and computational complexity were first considered over the fields of real and complex numbers and later generalized to arbitrary algebraic systems. This article approaches the theory of computational complexity over an arbitrary algebraic system by taking computability over its list extension as the underlying computational model. We study the resulting polynomial complexity classes and mention some NP-complete problems.

4.
Computational models are usually defined over specific domains. For example, Turing machines are defined over strings, and the recursive functions over the natural numbers. Nevertheless, one often uses one computational model to compute functions over another domain, in which case one is obliged to employ a representation, mapping elements of one domain into the other. For instance, Turing machines (or modern computers) are understood as computing numerical functions, by interpreting strings as numbers, via a binary or decimal representation, say. We ask: Is the choice of the domain interpretation important? Clearly, complexity is influenced, but does the representation also affect computability? Can it be that the same model computes strictly more functions via one representation than another? We show that the answer is “yes”, and further analyze the influence of domain interpretation on the extensionality of computational models (that is, on the set of functions computed by the model). We introduce the notion of interpretation-completeness for computational models that are basically unaffected by the choice of domain interpretation, and prove that Turing machines and the recursive functions are interpretation-complete, while two-counter machines are incomplete. We continue by examining issues based on model extensionality that are influenced by the domain interpretation. We suggest a notion for comparing computational power of models operating over arbitrary domains, as well as an interpretation of the Church-Turing Thesis over arbitrary domains.
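The following small sketch (our illustration, not code from the paper) shows how the choice of representation already changes which numeric function a fixed string-level program computes:

```python
# A fixed string-level "machine": append the symbol '1' to its input.
def string_prog(w: str) -> str:
    return w + "1"

# Representation 1: binary notation (most significant bit first).
def bin_enc(n: int) -> str: return format(n, "b")
def bin_dec(w: str) -> int: return int(w, 2)

# Representation 2: unary notation (n encoded as n copies of '1').
def un_enc(n: int) -> str: return "1" * n
def un_dec(w: str) -> int: return w.count("1")

def induced(encode, decode, n):
    """Numeric function induced by string_prog via a given representation."""
    return decode(string_prog(encode(n)))

for n in range(5):
    # binary interpretation yields n -> 2n + 1, unary interpretation yields n -> n + 1
    print(n, induced(bin_enc, bin_dec, n), induced(un_enc, un_dec, n))
```

Which extensional function "the machine computes" thus depends on the representation; the paper asks the stronger question of whether the whole class of computable functions can differ between representations.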

5.
6.
Journal of Complexity, 2003, 19(2): 132-152
We study computability of real-valued functions and the information needed for simulation of dynamical systems. In particular, we describe the application of Kolmogorov complexity theory to the computer simulation of irrational rotations in a single-neuron model. We quantify the information about the parameter needed to simulate the dynamics by exhibiting a difference in Kolmogorov complexity between a computable and a non-computable parameter. Finally, we show that all trajectories generated by an irrational rotation are non-computable if and only if its parameter is non-computable.
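As a toy illustration of the simulation issue (our example; the paper's setting is a single-neuron model), one can watch how truncating the rotation parameter degrades a simulated orbit of $x_{n+1} = (x_n + \alpha) \bmod 1$:

```python
from math import sqrt

alpha = (sqrt(5) - 1) / 2        # an irrational rotation number

def orbit(a, n, x0=0.0):
    """First n iterates of the circle rotation x -> (x + a) mod 1."""
    xs, x = [], x0
    for _ in range(n):
        x = (x + a) % 1.0
        xs.append(x)
    return xs

def circle_dist(s, t):
    d = abs(s - t) % 1.0
    return min(d, 1.0 - d)

true_orbit = orbit(alpha, 1000)
for k in (8, 16, 32):
    a_k = round(alpha * 2**k) / 2**k           # parameter truncated to k bits
    err = max(map(circle_dist, true_orbit, orbit(a_k, 1000)))
    # the error grows roughly like n * 2**(-k) before saturating, so simulating
    # n steps faithfully needs on the order of log2(n) bits of the parameter
    print(k, err)
```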

7.
A theory of approximation to measurable sets and measurable functions based on the concepts of recursion theory and discrete complexity theory is developed. The approximation method uses a model of oracle Turing machines, so the computational complexity may be defined in a natural way. This complexity measure may be viewed as a formulation of the average-case complexity of real functions, in contrast to the more restrictive worst-case complexity. The relationship between these two complexity measures is further studied and compared with the notion of distribution-free probabilistic computation. The computational complexity of the Lebesgue integral of polynomial-time approximable functions is studied and related to the question “FP = #P?”.

8.
Journal of Complexity, 1995, 11(1): 57-73
In the real number model of computation one assumes that arithmetic operations with real numbers and comparisons can be done with unit cost. In numerical analysis one often deals with problems where the information is only partial. This is an essential assumption for problems defined on function spaces, such as numerical integration, zero finding, or optimization. For such problems we must usually deal with partial information consisting of function values or Fourier coefficients, because a digital computer can only handle finite sets of numbers instead of functions. In information-based complexity it is assumed that certain functionals can be evaluated by an oracle and each call of the oracle costs c, where c > 0. In this paper we describe this model of computation more carefully. We also define two variants of stochastic (or Monte Carlo) algorithms. We believe that the results of information-based complexity, see the book of Traub, Wasilkowski, and Woźniakowski (1988, "Information-Based Complexity," Academic Press, New York/London), can be reconstructed in our model with only minor changes. As an example we present a "uniform algorithm" for a "uniform problem" in numerical integration. We compare our model (in the case where no oracle is used) with the model of Blum, Shub, and Smale (1989, Bull. Amer. Math. Soc. 21, 1-46). These authors use a different cost function: copy instructions are rather expensive in their model, while they are free in ours. In spite of this difference we show that the classes P and NP each coincide with respect to both cost functions.
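A minimal sketch of the oracle model described above (our illustration, with hypothetical helper names): function values are delivered by an oracle, each call is charged the cost c, and a "uniform algorithm" for integration on [0, 1] simply decides how many calls to spend:

```python
import math

class CountingOracle:
    """Wraps f and counts calls, so the information cost (calls * c) can be charged."""
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

def integrate_midpoint(oracle, n):
    """Composite midpoint rule on [0, 1] using exactly n oracle calls."""
    h = 1.0 / n
    return h * sum(oracle((i + 0.5) * h) for i in range(n))

g = CountingOracle(math.exp)
approx = integrate_midpoint(g, 64)
print(approx, math.e - 1, g.calls)   # ~1.71828 using 64 oracle calls
```

Arithmetic on reals is charged unit cost as in the real number model; only the oracle calls carry the extra cost c.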

9.
A reasonable computational complexity theory for real functions is obtained by using the modified infinite binary representation with digits 0, 1, and −1 for the real numbers, together with Turing machines with one-way output that transform modified binary input sequences into modified binary output sequences. As the main result of this paper it is shown that there is a trade-off between the input lookahead, i.e., the deviation from online computation, and the computational complexity for machines computing certain real functions.
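To fix notation (ours, not taken from the paper), the modified binary representation writes a real $x \in [-1,1]$ as
\[
x \;=\; \sum_{i=1}^{\infty} d_i\, 2^{-i}, \qquad d_i \in \{-1, 0, 1\},
\]
and the redundancy of this digit set, e.g. $1/4$ has both expansions $(0,1,0,0,\dots)$ and $(1,-1,0,0,\dots)$, is what makes one-way (online) output feasible: with ordinary binary digits a machine might have to read arbitrarily far ahead before it could safely commit to the next output digit.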

10.
Computable analysis extends classical discrete computability by enhancing the standard Turing machine model; it investigates mathematical analysis from the computability perspective. Though well developed at the computability level, it is still underdeveloped from the complexity perspective, that is, when bounding the available computational resources. Recently Kawamura and Cook developed a framework to define the computational complexity of operators arising in analysis. Our goal is to understand the effects of complexity restrictions on the analytical properties of the operator. We focus on the case of norms over C[0, 1], introduce the notion of the dependence of a norm on a point, and relate it to the query complexity of the norm. We show that the dependence at almost every point is of the order of the query complexity of the norm. A norm with small complexity depends on few points but, in compensation, depends highly on them. We briefly show how to obtain similar results for non-deterministic time complexity. We characterize the functionals that are computable using only one oracle call and discuss the uniformity of that characterization.

11.
We extend the reach of fixed-parameter analysis by introducing classes of parameterized sets defined in terms of decidability instead of complexity. Known results in computability theory can be expressed in the language of fixed-parameter analysis, making use of the landscape of these new classes. On the one hand this unifies results that would not otherwise show their kinship, while on the other it allows for further exchange of insights between complexity theory and computability theory. In the landscape of our fixed-parameter decidability classes, we recover part of the classification of real numbers according to their computability. From this, using the structural properties of the landscape, we obtain a new proof of the existence of P-selective bi-immune sets. Furthermore, we show that parameter values in parameterized sets in our uniformly fixed-parameter decidability classes interact with both instance complexity and Kolmogorov complexity. By deriving a parameter-based upper bound on instance complexity, we demonstrate how parameters convey a sense of randomness. Motivated by the instance complexity conjecture, we show that this upper bound on the instance complexity is infinitely often also an upper bound on the Kolmogorov complexity.

12.
For any class of operators that transform unary total functions on the natural numbers into functions of the same kind, we define what it means for a real function to be uniformly computable or conditionally computable with respect to this class. These two computability notions are natural generalizations of notions introduced in a previous paper co-authored with Andreas Weiermann and in another previous paper by the same authors, respectively. Under certain weak assumptions about the class in question, we show that conditional computability is preserved by substitution, that all conditionally computable real functions are locally uniformly computable, and that those with compact domains are uniformly computable. The introduced notions bear some similarity to the uniform computability and its non-uniform extension considered by Katrin Tent and Martin Ziegler; however, there are also essential differences between conditional computability and the non-uniform computability in question.

13.
In the last decade there have been several attempts to understand the relations between the many models of analog computation. Unfortunately, most models are not equivalent. Euler's Gamma function, which is computable in the sense of computable analysis but cannot be generated by Shannon's General Purpose Analog Computer (GPAC), has often been used to argue that the GPAC is less powerful than digital computation. However, when computability with GPACs is not restricted to real-time generation of functions, it has recently been shown that Gamma becomes computable by a GPAC. Here we extend this result by showing that, in an appropriate framework, the GPAC and computable analysis are actually equivalent from the computability point of view, at least over compact intervals. Since GPACs are equivalent to systems of polynomial differential equations, this shows that all real computable functions over compact intervals can be defined by such systems.
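As a standard illustration of the polynomial-ODE view of the GPAC (a textbook example, not taken from the paper), the sine function is GPAC-generable because it is a component of the solution of a polynomial initial value problem:
\[
y_1' = y_2,\qquad y_2' = -y_1,\qquad y_1(0)=0,\; y_2(0)=1
\;\Longrightarrow\; y_1(t)=\sin t,\ y_2(t)=\cos t .
\]
The equivalence result then says that, over compact intervals, every function computable in the sense of computable analysis can likewise be captured by such polynomial systems, in the non-real-time sense discussed above.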

14.
We give a correspondence between two notions of complexity for real functions: poly-time computability according to Ko and a notion that arises naturally when one considers the application of Mehlhorn's class of the basic feasible functionals to computable analysis. We show that both notions define the same set of polynomial-time computable real functions.

15.
Building on existing high-precision fast algorithms for the basic elementary functions, we further study acceleration algorithms for these functions. The existing high-precision fast algorithms achieve fast computation of a function to arbitrary precision by expanding it into a power series. The acceleration algorithm, before the power series expansion, first uses various properties of the function to reduce its argument, which lowers the computational difficulty of the power series expansion and increases the speed of evaluation. We present the acceleration algorithm, analyze it with respect to both computational error and algorithmic complexity, and give the optimal acceleration algorithm with minimal error and lowest algorithmic complexity. Finally, for the trigonometric functions, hyperbolic functions, the exponential function, and their inverses, we give concrete acceleration procedures and computational results over the real numbers.
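A sketch of the argument-reduction idea described above (our illustration using the exponential function; the helper names are ours, not the paper's): reduce the argument with a functional identity first, so that the power series needs far fewer terms for the same precision.

```python
import math

def exp_taylor(r, terms=20):
    """Plain power-series evaluation of exp(r); converges quickly only for small |r|."""
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= r / k
        s += t
    return s

def exp_reduced(x, terms=20):
    """Argument reduction: write x = k*ln 2 + r with |r| <= (ln 2)/2,
    so exp(x) = 2**k * exp(r); the series is only ever evaluated at the small r."""
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    return math.ldexp(exp_taylor(r, terms), k)   # exact scaling by 2**k

print(exp_reduced(10.0), math.exp(10.0))
```

The same pattern (reduce the argument via periodicity, symmetry, or addition formulas, then expand) applies to the trigonometric and hyperbolic functions and their inverses treated in the paper.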

16.
Continuing the paper [7], in which the Blum-Shub-Smale approach to computability over the reals was generalized to arbitrary algebraic structures, this paper deals with computability and recognizability over structures of infinite signature. It begins by discussing related properties of the linear and scalar real structures and of their discrete counterparts over the natural numbers. Then the existence of universal functions is shown to be equivalent to the effective encodability of the underlying structure. Such structures even have universal functions satisfying the s-m-n theorem and related features. The real and discrete examples are discussed with respect to effective encodability. Megiddo structures and computational extensions of effectively encodable structures are encodable, too. As further variants of universality, universal functions with enumerable sets of program codes and ones with constructible codes are investigated. Finally, the existence of m-complete sets is shown to be independent of the effective encodability of structures, and the linear and scalar structures are discussed once more under this aspect.

17.
The complexity of linear programming is discussed in the “integer” and “real number” models of computation. Even though the integer model is widely used in theoretical computer science, the real number model is more useful for estimating an algorithm's running time in actual computation. Although the ellipsoid algorithm is a polynomial-time algorithm in the integer model, we prove that it has unbounded complexity in the real number model. We conjecture that there exists no polynomial-time algorithm for the linear inequalities problem in the real number model. We also conjecture that linear inequalities are strictly harder than linear equalities in all “reasonable” models of computation.

18.
An antichain function is the characteristic function of an antichain in the Boolean cube. The set of antichain functions forms an infinite complete basis. We study the computational complexity of Boolean functions over this antichain basis. In this paper we prove an asymptotic lower bound of order $\sqrt n$ for the computational complexity of a linear function, a majority function, and almost all Boolean functions of n variables.
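For concreteness (our formulation of the standard definition, not quoted from the abstract), an antichain $A \subseteq \{0,1\}^n$ is a set of pairwise incomparable vectors with respect to the coordinatewise order $x \le y \iff x_i \le y_i$ for all $i$, and the corresponding antichain function is its characteristic function:
\[
f_A(x) \;=\; \begin{cases} 1, & x \in A,\\ 0, & \text{otherwise.} \end{cases}
\]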

19.
For two nonzero polynomials [formula], the successive signed Euclidean division yields algorithms, that is, semi-algebraic computation trees, for tasks such as computing the sequence of successive quotients, deciding the signs of the sequence of remainders at a point $a \in \mathbb{R}$, deciding the number of remainders, or deciding the degree pattern of the sequence of remainders. In this paper we show lower bounds of order $m\log_2(m)$ for these tasks, within the computational framework of semi-algebraic computation trees. The inevitably long paths in semi-algebraic computation trees can be specified as those followed by certain prime cones in the real spectrum of a polynomial ring. We give in the paper a positive answer to the question posed in T. Lickteig (J. Pure Appl. Algebra 110(2), 131-184 (1996)) whether the degree of the zero-set of the support of a prime cone provides a lower bound on the complexity of isolating the prime cone. The applications are based on the degree inequalities given by Strassen and Schuster and extend their work to the real situation in various directions.
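The signed remainder sequence referred to above is the Sturm-like sequence $r_0 = p$, $r_1 = q$, $r_{i+1} = -(r_{i-1} \bmod r_i)$; a small sketch (our illustration, not the paper's semi-algebraic computation trees) computes it directly:

```python
import numpy as np

def signed_remainder_sequence(p, q):
    """p, q: coefficient lists, highest degree first; returns r_0, r_1, r_2, ..."""
    seq = [np.trim_zeros(np.asarray(p, float), 'f'),
           np.trim_zeros(np.asarray(q, float), 'f')]
    while seq[-1].size and np.any(seq[-1]):
        _, rem = np.polydiv(seq[-2], seq[-1])   # Euclidean division of polynomials
        rem = np.trim_zeros(rem, 'f')
        if rem.size == 0:
            break
        seq.append(-rem)                        # "signed": negate each remainder
    return seq

# Example: p = x^3 - 3x + 1 and q = p' = 3x^2 - 3.
for r in signed_remainder_sequence([1, 0, -3, 1], [3, 0, -3]):
    print(r)
```

The lower bounds concern exactly the tasks listed above for this sequence: its signs at a point, its length, and its degree pattern.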

20.
We study summation of sequences and integration in the quantum model of computation. We develop quantum algorithms for computing the mean of sequences that satisfy a p-summability condition and for integration of functions from Lebesgue spaces $L_p([0,1]^d)$, and analyze their convergence rates. We also prove lower bounds showing that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of G. Brassard et al. (2000, "Quantum Amplitude Amplification and Estimation," Technical Report, http://arXiv.org/abs/quant-ph/0005055) on computing the mean for bounded sequences and complements results of E. Novak (2001, J. Complexity 17, 2-16) on integration of functions from Hölder classes. The analysis requires an appropriate model of quantum computation, capable of covering the typical features of numerical problems such as dealing with real numbers and real-valued functions and with vector and function spaces. We develop and study such a model, which can be viewed as a quantum setting for information-based complexity theory.
