Similar documents
1.
The complexity of a computational problem is the order of computational resources which are necessary and sufficient to solve the problem. The complexity of an algorithm is the cost of that particular algorithm. We say that a problem has polynomial complexity if its computational complexity is a polynomial in the measure of input size. We introduce polynomial time algorithms based on generating functions for computing the Myerson value in weighted voting games restricted by a tree. Moreover, we apply the new generating-function algorithm to compute the Myerson value in the Council of Ministers of the European Union restricted by a communication structure.
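The generating-function step can be illustrated on a plain weighted voting game (without the tree restriction, and computing raw Banzhaf swing counts rather than the Myerson value). The weights and quota below are made-up example values:

```python
def swing_counts(weights, quota):
    """Raw Banzhaf swing counts for the weighted voting game
    [quota; weights], via generating functions: coeff[s] holds the
    number of coalitions of the remaining players with total weight s."""
    total = sum(weights)
    counts = []
    for i, wi in enumerate(weights):
        coeff = [0] * (total - wi + 1)
        coeff[0] = 1
        for j, wj in enumerate(weights):
            if j != i:
                # multiply the generating polynomial by (1 + x**wj)
                for s in range(total - wi, wj - 1, -1):
                    coeff[s] += coeff[s - wj]
        # player i is critical for coalitions S with quota - wi <= w(S) < quota
        counts.append(sum(coeff[max(0, quota - wi):quota]))
    return counts
```

For the game [6; 4, 3, 2, 1] this yields the counts [5, 3, 3, 1]; a dictator collects every swing.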

2.
We study summation of sequences and integration in the quantum model of computation. We develop quantum algorithms for computing the mean of sequences that satisfy a p-summability condition and for integration of functions from Lebesgue spaces L_p([0, 1]^d), and analyze their convergence rates. We also prove lower bounds showing that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of G. Brassard et al. (2000, "Quantum Amplitude Amplification and Estimation," Technical Report, http://arXiv.org/abs/quant-ph/0005055) on computing the mean for bounded sequences and complements results of E. Novak (2001, J. Complexity 17, 2–16) on integration of functions from Hölder classes. The analysis requires an appropriate model of quantum computation, capable of covering the typical features of numerical problems such as dealing with real numbers and real-valued functions and with vector and function spaces. We develop and study such a model, which can be viewed as a quantum setting for information-based complexity theory.

3.
The multiplicative complexity of a finite set of rational functions is the number of essential multiplications and divisions that are necessary and sufficient to compute these rational functions. We prove that the multiplicative complexity of inversion in the division algebra ℍ of Hamiltonian quaternions over the reals, that is, the multiplicative complexity of the coordinates of the inverse of a generic element from ℍ, is exactly eight. Furthermore, we show that the multiplicative complexity of the left and right division of Hamiltonian quaternions is at least eleven. July 17, 2001. Final version received: October 8, 2001.
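For reference, the coordinates whose multiplicative complexity is being counted come from elementary formulas. The naive sketch below uses far more than the optimal eight multiplications; it only fixes notation, it is not the minimal algorithm the abstract refers to:

```python
def quat_mul(p, q):
    """Schoolbook Hamiltonian quaternion product (16 real multiplications)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def quat_inv(q):
    """Inverse of a nonzero quaternion: conjugate divided by the squared norm."""
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)
```

Multiplying a generic quaternion by its inverse recovers the identity (1, 0, 0, 0).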

4.
We present a general diagrammatic approach to the construction of efficient algorithms for computing the Fourier transform of a function on a finite group. By extending work which connects Bratteli diagrams to the construction of Fast Fourier Transform algorithms we make explicit use of the path algebra connection to the construction of Gel’fand–Tsetlin bases and work in the setting of quivers. We relate this framework to the construction of a configuration space derived from a Bratteli diagram. In this setting the complexity of an algorithm for computing a Fourier transform reduces to the calculation of the dimension of the associated configuration space. Our methods give improved upper bounds for computing the Fourier transform for the general linear groups over finite fields, the classical Weyl groups, and homogeneous spaces of finite groups, while also recovering the best known algorithms for the symmetric group.
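The simplest instance of a fast group Fourier transform is the cyclic group Z_n for n a power of two, where the divide-and-conquer structure reduces to the radix-2 Cooley–Tukey FFT. A minimal sketch (not the diagrammatic construction of the abstract):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT of a sequence whose length is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed entries
    odd = fft(x[1::2])    # transform of odd-indexed entries
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The recursion does O(n log n) work, versus O(n^2) for the naive discrete Fourier transform.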

5.
6.
We give three algorithms for computing the parent of a node in a threaded binary tree, and calculate the average case complexity of each. By comparing these to the unit cost of obtaining the parent of a node with an explicit parent-pointer field, it is possible to balance runtime and storage cost with respect to the task of finding parent nodes in binary trees. The results obtained show that, although the worst case complexity for an n-node tree is obviously O(n) for all three algorithms, the average case complexity for two input distributions is asymptotic (from below) to either 3 or 2.
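As an illustration of the kind of algorithm studied, here is a sketch of parent-finding in a fully in-threaded binary tree (the node layout and field names are my own, not necessarily those of the paper). The idea: the inorder predecessor of a node's whole subtree is its parent when the node is a right child, and the inorder successor is its parent when it is a left child.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        # True means the corresponding link is a thread (or empty),
        # not a real child pointer
        self.lthread = self.rthread = True

def parent(x):
    # Candidate 1: predecessor of x's whole subtree (follow left links
    # to the leftmost node, then its left thread). Parent iff x is a
    # right child.
    q = x
    while not q.lthread:
        q = q.left
    p = q.left
    if p is not None and not p.rthread and p.right is x:
        return p
    # Candidate 2: successor of x's whole subtree. Parent iff x is a
    # left child.
    q = x
    while not q.rthread:
        q = q.right
    p = q.right
    if p is not None and not p.lthread and p.left is x:
        return p
    return None  # x is the root
```

Each call walks one root-to-descendant path, giving the O(n) worst case mentioned above, while typical paths are short.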

7.
The discrete prolate spheroidal sequences (DPSS's) provide an efficient representation for discrete signals that are perfectly timelimited and nearly bandlimited. Due to the high computational complexity of projecting onto the DPSS basis – also known as the Slepian basis – this representation is often overlooked in favor of the fast Fourier transform (FFT). We show that there exist fast constructions for computing approximate projections onto the leading Slepian basis elements. The complexity of the resulting algorithms is comparable to the FFT, and scales favorably as the quality of the desired approximation is increased. In the process of bounding the complexity of these algorithms, we also establish new nonasymptotic results on the eigenvalue distribution of discrete time–frequency localization operators. We then demonstrate how these algorithms allow us to efficiently compute the solution to certain least-squares problems that arise in signal processing. We also provide simulations comparing these fast, approximate Slepian methods to exact Slepian methods as well as the traditional FFT based methods.

8.
This work is concerned with the algorithmic reachability analysis of continuous-time linear systems with constrained initial states and inputs. We propose an approach for computing an over-approximation of the set of states reachable on a bounded time interval. The main contribution over previous works is that it allows us to consider systems whose sets of initial states and inputs are given by arbitrary compact convex sets represented by their support functions. We actually compute two over-approximations of the reachable set. The first one is given by the union of convex sets with computable support functions. As the representation of convex sets by their support function is not suitable for some tasks, we derive from this first over-approximation a second one given by the union of polyhedra. The overall computational complexity of our approach is comparable to the complexity of the most competitive available specialized algorithms for reachability analysis of linear systems using zonotopes or ellipsoids. The effectiveness of our approach is demonstrated on several examples.
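The support-function calculus that makes this representation attractive fits in a few lines. The 2-D polytope and linear map below are toy examples of mine, not the paper's benchmark systems:

```python
def support(vertices, d):
    """Support function h_X(d) = max over v in X of <v, d>, for a 2-D
    polytope X given by its vertices."""
    return max(v[0] * d[0] + v[1] * d[1] for v in vertices)

def support_image(A, vertices, d):
    """h_{AX}(d) = h_X(A^T d): a linear image needs no new vertex set."""
    ATd = (A[0][0] * d[0] + A[1][0] * d[1],
           A[0][1] * d[0] + A[1][1] * d[1])
    return support(vertices, ATd)

def support_sum(vs1, vs2, d):
    """h_{X + Y}(d) = h_X(d) + h_Y(d) for the Minkowski sum X + Y."""
    return support(vs1, d) + support(vs2, d)
```

These two identities (linear image and Minkowski sum) are exactly what reachability recursions for x' = Ax + u need, which is why the support-function representation composes so cheaply.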

9.
Optimization, 2012, 61(12):2291–2323

We study and solve the two-stage stochastic extended second-order cone programming problem. We show that the barrier recourse functions and the composite barrier functions for this optimization problem are self-concordant families with respect to barrier parameters. These results are used to develop primal decomposition-based interior-point algorithms. The worst case iteration complexity of the developed algorithms is shown to be the same as that for the short- and long-step primal interior algorithms applied to the extensive formulation of our problem.

10.
Alberto Seeger, Mounir Torki. TOP, 2014, 22(2):716–738

We introduce an axiomatic formalism for the concept of the center of a set in a Euclidean space. Then we explain how to exploit possible symmetries and possible cyclicities in the set in order to localize its center. Special attention is paid to the determination of centers in cones of matrices. Despite its highly abstract flavor, our work has a strong connection with convex optimization theory. In fact, computing the so-called “incenter” of a solid closed convex cone is a matter of solving a nonsmooth convex optimization program. On the other hand, the concept of the incenter of a solid closed convex cone has a bearing on the complexity analysis and design of algorithms for convex optimization programs under conic constraints.

11.
In this paper we prove that for a certain class of systems of bilinear forms, all minimal division-free algorithms are essentially bilinear. This class includes systems for computing products in finite algebraic extension fields, and systems for computing the products of Toeplitz and Hankel matrices with vectors. Our results, together with known classification theorems, completely describe all minimal division-free algorithms for computing these systems. We also prove, as an immediate consequence of our results, that the multiplicative complexity of the quaternion product over a real field is 8.

12.
We present a new algorithm for computing the ideal class group of an imaginary quadratic order which is based on the multiple polynomial version of the quadratic sieve factoring algorithm. Although no formal analysis is given, we conjecture that our algorithm has sub-exponential complexity, and computational experience shows that it is significantly faster in practice than existing algorithms.
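The object being computed can be made concrete for small discriminants by brute-force enumeration of reduced forms, which takes exponential time in the size of the discriminant and is exactly what sieve-based algorithms are designed to beat. A sketch:

```python
from math import gcd

def class_number(D):
    """Class number h(D) for a negative discriminant D, counted via
    reduced primitive binary quadratic forms (a, b, c) with
    b^2 - 4ac = D, |b| <= a <= c (and b >= 0 when |b| = a or a = c)."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                    # b must have the parity of D
    while 3 * b * b <= -D:       # reduction forces 3b^2 <= |D|
        m = (b * b - D) // 4     # = a * c
        a = max(b, 1)
        while a * a <= m:
            if m % a == 0:
                c = m // a
                if gcd(gcd(a, b), c) == 1:   # primitive form
                    h += 1
                    if 0 < b < a < c:        # (a, -b, c) is also reduced
                        h += 1
            a += 1
        b += 2
    return h
```

For example, h(-23) = 3, coming from the reduced forms (1, 1, 6) and (2, ±1, 3).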



13.
Two simply typed term systems are considered, both for representing algorithms computing primitive recursive functions: one is based on primitive recursion, the other on recursion on notation. A purely syntactical method of determining the computational complexity of algorithms, called the μ-measure, is employed to uniformly integrate traditional results in subrecursion theory with resource-free characterisations of sub-elementary complexity classes. Extending the Schwichtenberg and Müller characterisation of the Grzegorczyk classes, it is shown that E_{n+1} = R^n_1 for n ≥ 1, where R^n_i denotes the i-th modified Heinermann class. The proof does not refer to any machine-based computation model, unlike the Schwichtenberg and Müller proofs. By Ritchie's result, the first level then characterises the linear-space computable functions. Using the same method, a short and straightforward proof shows that the corresponding level of the notation-based system characterises the polynomial time computable functions. Furthermore, the classes based on the two term systems coincide at and above level 2. Received: 22 September 1997 / Revised version: 12 May 1999
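The operational difference between the two recursion schemes is easy to see: primitive recursion descends through the value of its argument, recursion on notation through its binary representation. A toy sketch of mine (not the paper's term systems), counting recursive calls:

```python
def calls_value(x):
    """Recursive calls when recursing on the *value* of x
    (the primitive-recursion pattern): exactly x calls."""
    return 0 if x == 0 else 1 + calls_value(x - 1)

def calls_notation(x):
    """Recursive calls when recursing on the *notation* (binary digits)
    of x: exactly bit_length(x) calls."""
    return 0 if x == 0 else 1 + calls_notation(x // 2)
```

The exponential gap between x and bit_length(x) is what lets recursion on notation capture small complexity classes such as polynomial time.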

14.
The Stem algorithm and Steuer's filtering procedure were offered to 142 subjects to assist them in solving six car buying problems of various complexity. Subjects could change between the algorithms and some other program options almost at will. We show that the algorithms are evaluated differently and that objective measures of the solution process show differences between the algorithms that support the subjective evaluations. If the objective task complexity is noticed (at least along one dimension: the number of objective functions), we find that the choice of procedure varies with it, as well as with subjective complexity evaluations. In general, Stem is favored by a minority of subjects, preferably for smaller sized problems.

15.
We describe a general procedure for computing Stokes matrices for solutions of linear differential equations with polynomial coefficients. The algorithms developed make an automation of the calculations possible for a wide class of equations. We apply our techniques to some classical holonomic functions and also to some new special functions that are interesting in their own right: Ecalle's accelerating functions.

16.
We consider the problem of computing numerically the boundary control for the wave equation. It is by now well known that, due to high frequency spurious oscillations, numerical instabilities occur and may lead to the failure of convergence of some apparently natural numerical algorithms. Several remedies have been proposed in the literature to compensate for this: Tychonoff regularization, Fourier filtering, mixed finite elements, etc. In this Note we prove that the two-grid method proposed by Glowinski (J. Comput. Phys. 103 (2) (1992) 189–221) does indeed provide a convergent algorithm. This is done in the context of the finite-difference semi-discrete approximation of the 1-d wave equation. To cite this article: M. Negreanu, E. Zuazua, C. R. Acad. Sci. Paris, Ser. I 338 (2004).

17.
Computational methods for spline smoothing are studied in the context of the linear smoothing spline. Comparisons are made between two efficient methods for computing the estimator: band-limited basis functions and the Kalman filter. In particular, the Kalman filter approach is shown to be an efficient method for computing under the Kimeldorf–Wahba representation for the estimator. Run time comparisons are made between band-limited B-spline and Kalman filter based algorithms.
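The Kalman-filter side of such a comparison can be sketched for the simplest state-space model, a local level observed in noise; the variances below are arbitrary illustration values, not those arising from the linear smoothing spline:

```python
def kalman_level(ys, q, r, m0=0.0, p0=1e6):
    """Filtered means for a local-level model, one O(1) update per point:
       state:       m_t = m_{t-1} + w_t,  w_t ~ N(0, q)
       observation: y_t = m_t + v_t,      v_t ~ N(0, r)
    A diffuse prior (large p0) lets the first observation dominate."""
    m, p = m0, p0
    out = []
    for y in ys:
        p = p + q                 # predict: add state noise variance
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update mean toward the observation
        p = (1 - k) * p           # update variance
        out.append(m)
    return out
```

The whole pass is O(n) in the number of observations, which is the efficiency argument for the Kalman-filter formulation.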

18.
19.
Starting from certain rational varieties blown-up from N(P1), we construct a tropical, i.e., subtraction-free birational, representation of Weyl groups as a group of pseudo-isomorphisms of the varieties. We develop an algebro-geometric framework of τ-functions as defining functions of exceptional divisors on the varieties. In the case where the corresponding root system is of affine type, our construction yields a class of (higher order) q-difference Painlevé equations and its algebraic degree grows quadratically.

20.
In 1999, Romaguera and Schellekens introduced the theory of dual complexity spaces as a part of the development of a mathematical (topological) foundation for the complexity analysis of programs and algorithms [S. Romaguera, M.P. Schellekens, Quasi-metric properties of complexity spaces, Topology Appl. 98 (1999) 311-322]. In this work we extend the theory of dual complexity spaces to the case where the complexity functions are valued in an ordered normed monoid. We show that the complexity space of an ordered normed monoid inherits the ordered normed structure. Moreover, the order structure allows us to prove some topological and quasi-metric properties of the new dual complexity spaces. In particular, we show that these complexity spaces are, under certain conditions, Hausdorff and satisfy a kind of completeness. Finally, we develop a connection of our new approach with Interval Analysis.
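For the real-valued case, the quasi-metric underlying the dual complexity space can be sketched directly; the truncation to finitely many terms is my simplification (the space itself consists of functions f with a convergent weighted sum):

```python
def dual_dist(f, g, terms=64):
    """Truncated quasi-metric of the dual complexity space:
       d(f, g) = sum over n of 2**(-n) * max(g(n) - f(n), 0).
    d(f, g) == 0 captures that g is pointwise at most f, i.e. g is at
    least as efficient as f; note d is asymmetric."""
    return sum(2.0 ** (-n) * max(g(n) - f(n), 0.0) for n in range(terms))
```

The asymmetry is the point: the distance from a slow complexity function to a faster one is zero, encoding "is an improvement of" directly in the topology.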
