Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
The paper deals with the notion of analytic complexity introduced by V.K. Beloshapka. We give an algorithm which allows one to check whether a bivariate analytic function belongs to the second class of analytic complexity. We also provide estimates for the analytic complexity of classical discriminants and introduce the notion of analytic complexity of a knot.

2.
A pseudo-natural algorithm for the word problem of a finitely presented group is an algorithm which not only tells us whether or not a word w equals 1 in the group but also gives a derivation of 1 from w when w equals 1. In [13], [14] Madlener and Otto show that, if we measure the complexity of a primitive recursive algorithm by its level in the Grzegorczyk hierarchy, there are groups in which a pseudo-natural algorithm is arbitrarily more complicated than an algorithm which simply solves the word problem. In a given group the lowest degree of complexity that can be realised by a pseudo-natural algorithm is essentially the derivational complexity of that group. Thus the result separates the derivational complexity of the word problem of a finitely presented group from its intrinsic complexity. The proof given in [13] involves the construction of a finitely presented group G from a Turing machine T such that the intrinsic complexity of the word problem for G reflects the complexity of the halting problem of T, while the derivational complexity of the word problem for G reflects the runtime complexity of T. The proof of one of the crucial lemmas in [13] is only sketched, and part of the purpose of this paper is to give the full details of this proof. We also obtain a variant of their proof, using modular machines rather than Turing machines. As for several other results, this simplifies the proofs considerably. MSC: 03D40, 20F10.

3.
The complexity of a computational problem is the order of computational resources which are necessary and sufficient to solve the problem. The algorithm complexity is the cost of a particular algorithm. We say that a problem has polynomial complexity if its computational complexity is a polynomial in the measure of input size. We introduce polynomial-time algorithms based on generating functions for computing the Myerson value in weighted voting games restricted by a tree. Moreover, we apply the new generating-function algorithm to compute the Myerson value in the Council of Ministers of the European Union restricted by a communication structure.
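The generating-function technique referred to here can be illustrated on a plain weighted voting game: a polynomial whose coefficient of x^w counts the coalitions of total weight w yields swing counts (and hence power indices) without enumerating coalitions explicitly. The Python sketch below is only a minimal illustration of that counting idea for Banzhaf-style swings; the weights and quota are a made-up example, and the paper's actual algorithms for the tree-restricted Myerson value are more involved.

```python
def swing_counts(weights, quota):
    """For each player i, count the coalitions S of the other players with
    quota - weights[i] <= weight(S) < quota, i.e. coalitions for which i is a
    swing.  coeff[w] is the coefficient of x**w in the generating function
    prod_{j != i} (1 + x**weights[j]), truncated below the quota."""
    n = len(weights)
    swings = []
    for i in range(n):
        coeff = [0] * quota          # coalition weights >= quota are irrelevant
        coeff[0] = 1
        for j, wj in enumerate(weights):
            if j == i:
                continue
            # multiply the generating function by (1 + x**wj), truncated
            for w in range(quota - 1, wj - 1, -1):
                coeff[w] += coeff[w - wj]
        lo = max(0, quota - weights[i])
        swings.append(sum(coeff[lo:quota]))
    return swings

# toy weighted voting game (assumed example); raw Banzhaf index = swings / 2**(n-1)
print(swing_counts([4, 3, 2, 1], quota=6))
```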

4.
We state a tree labeling problem, give an algorithm for solving it, and discuss some complexity characteristics of the algorithm. Furthermore, we discuss applications of these results to the approximation of a continuous function with given accuracy by linear combinations of characteristic functions of dyadic intervals with as few summands as possible. We also touch upon issues concerning tree-aided signal discretization.

5.
We consider the bilinear complexity of certain sets of bilinear forms. We study a class of direct sums of bilinear forms. For this class of problems we show that the bilinear complexity of one direct sum is the sum of the bilinear complexities of the summands and that every minimal bilinear algorithm for computing the direct sums is a direct-sum algorithm. We also exhibit some sets of bilinear forms which belong to this class.

6.
We study the integration of functions with respect to an unknown density. Information is available as oracle calls to the integrand and to the non-normalized density function. We are interested in analyzing the integration error of optimal algorithms (or the complexity of the problem) with emphasis on the variability of the weight function. For a corresponding large class of problem instances we show that the complexity grows linearly in the variability, and the simple Monte Carlo method provides an almost optimal algorithm. Under additional geometric restrictions (mainly log-concavity) for the density functions, we establish that a suitable adaptive local Metropolis algorithm is almost optimal and outperforms any non-adaptive algorithm.
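As a concrete illustration of this problem setting, the sketch below estimates a weighted integral from oracle calls to the integrand and to a non-normalized density, using the simple (self-normalized) Monte Carlo method mentioned above. The domain [0,1]^d, the uniform sampling, and the toy integrand and density are assumptions made for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simple_mc(f, rho, dim, n):
    """Estimate the integral of f*rho over [0,1]^dim divided by the integral
    of rho, from n uniform samples, using only oracle calls to f and to the
    non-normalized density rho (ratio / self-normalized estimator)."""
    x = rng.uniform(size=(n, dim))
    w = np.array([rho(xi) for xi in x])    # non-normalized weights
    fx = np.array([f(xi) for xi in x])
    return float(np.sum(w * fx) / np.sum(w))

# toy example: integrand f(x) = sum(x), non-normalized density rho(x) = exp(-|x|^2)
print(simple_mc(lambda x: x.sum(), lambda x: np.exp(-np.dot(x, x)), dim=3, n=100_000))
```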

7.
Mathematical model and optimization in production investment
We study a class of nonlinear programming models derived from various investment problems, such as industrial production investment, educational investment, and farming investment. We analyze the properties of solutions of these models and obtain a polynomial-time algorithm. We also apply our theory to a concrete example to demonstrate the complexity of the algorithm.

8.
Meta-heuristics are a powerful way to approximately solve hard combinatorial optimization problems. However, for a given problem, the quality of results can vary considerably from one instance to another. Understanding such behaviour is important from a theoretical point of view, but also has practical applications, such as the generation of instances during the evaluation stage of a heuristic. In this paper we propose a new complexity measure for the Quadratic Assignment Problem in the context of meta-heuristics based on local search, e.g. simulated annealing. We show how the ruggedness coefficient previously introduced by the authors, in conjunction with the well-known concept of dominance, provides important features of the search space explored during a local search algorithm, and gives a rather precise idea of the complexity of an instance for these heuristics. We comment on previous experimental studies concerning tabu search methods and genetic algorithms with local search in the light of our complexity measure. New computational results with simulated annealing and tabu search are presented.
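One common way to make landscape ruggedness concrete for a local-search neighbourhood is the autocorrelation of costs along a random walk of swap moves. The sketch below computes such a lag-1 autocorrelation for a random QAP instance; it is only a generic proxy under these assumptions (random instance, swap neighbourhood) and not necessarily the ruggedness coefficient defined by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

def qap_cost(perm, flow, dist):
    """QAP cost when facility i is placed at location perm[i]."""
    n = len(perm)
    return sum(flow[i, j] * dist[perm[i], perm[j]] for i in range(n) for j in range(n))

def walk_autocorrelation(flow, dist, steps=2000):
    """Lag-1 autocorrelation of the cost series along a random walk of pairwise
    swaps: values near 1 suggest a smooth landscape, values near 0 a rugged one."""
    n = flow.shape[0]
    perm = rng.permutation(n)
    costs = []
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]      # one random swap move
        costs.append(qap_cost(perm, flow, dist))
    c = np.asarray(costs, dtype=float) - np.mean(costs)
    return float(np.dot(c[:-1], c[1:]) / np.dot(c, c))

n = 12
flow = rng.integers(0, 10, size=(n, n))
dist = rng.integers(0, 10, size=(n, n))
print(walk_autocorrelation(flow, dist))
```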

9.
The algorithm by Bar-Yehuda, Goldreich and Itai is one of the best known randomized broadcast algorithms for radio networks. Its probability of success and time complexity are nearly optimal. We propose a modification of this algorithm which decreases the communication complexity while preserving the other properties. Moreover, we show that the local communication complexity of the modified algorithm is deterministic.

10.
We study the multivariate Feynman–Kac path integration problem. This problem was studied in Plaskota et al. (J. Comp. Phys. 164 (2000) 335) for the univariate case. We describe an algorithm based on uniform approximation, instead of the L2-approximation used in Plaskota et al. (2000). Similarly to Plaskota et al. (2000), our algorithm requires extensive precomputing. We also present bounds on the complexity of our problem. The lower bound is provided by the complexity of a certain integration problem, and the upper bound by the complexity of the uniform approximation problem. The algorithm presented in this paper is almost optimal for the classes of functions for which uniform approximation and integration have roughly the same complexities.

11.
We present several complexity results related to the generation and counting of all circuits of an independence system. Our motivation to study these problems is their relevance in the solution of resource-constrained scheduling problems, where an independence system arises as the subsets of jobs that may be scheduled simultaneously. We are interested in the circuits of this system, the so-called minimal forbidden sets, which are minimal subsets of jobs that must not be scheduled simultaneously. As a consequence of the complexity results for general independence systems, we obtain several complexity results in the context of resource-constrained scheduling. On that account, we propose and analyze a simple backtracking algorithm that generates all minimal forbidden sets for such problems. The performance of this algorithm, in comparison to a previously suggested divide-and-conquer approach, is evaluated empirically using instances from the project scheduling library PSPLIB. Acknowledgement: We appreciate the input of two anonymous referees. In particular, the deep remarks of one of them greatly improved our understanding of several issues; he also suggested the simplified Example 1. We thank Marc Pfetsch and Alexander Grigoriev for several enlightening discussions. Marc Pfetsch also pointed us to the paper [15]. Parts of this work were done while the authors were PhD students at the Technische Universität Berlin, Germany, where Frederik Stork was supported by DFG grant Mo 446/3-3, and Marc Uetz was supported by bmb+f grant 03-MO7TU1-3 and GIF grant I 246-304.02/97.
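For intuition, the sketch below enumerates minimal forbidden sets by simple backtracking in the special case of a single renewable resource, where a set of jobs is forbidden exactly when its total demand exceeds the capacity. It only illustrates the flavour of such a backtracking enumeration; the demands and capacity are an assumed toy instance, and the paper's algorithm handles the general resource-constrained setting.

```python
def minimal_forbidden_sets(demand, capacity):
    """Enumerate the minimal subsets of jobs whose total resource demand
    exceeds the capacity (single-resource special case of the circuits of
    an independence system)."""
    order = sorted(range(len(demand)), key=lambda j: demand[j])  # ascending demand
    circuits = []

    def extend(pos, chosen, total):
        for i in range(pos, len(order)):
            j = order[i]
            new_total = total + demand[j]
            if new_total > capacity:
                # chosen + [j] is forbidden; it is minimal iff dropping its
                # smallest-demand member restores feasibility
                if new_total - min(demand[k] for k in chosen + [j]) <= capacity:
                    circuits.append(sorted(chosen + [j]))
            else:
                extend(i + 1, chosen + [j], new_total)

    extend(0, [], 0)
    return circuits

# toy instance (assumed): demands of five jobs, resource capacity 10
print(minimal_forbidden_sets([3, 4, 5, 6, 8], 10))
```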

12.
We replace the usual setting for error-correcting codes (i.e. vector spaces over finite fields) with that of permutation groups. We give an algorithm which uses a combinatorial structure, which we call an uncovering-by-bases, related to covering designs, and construct some examples of these. We also analyse the complexity of the algorithm. We then formulate a conjecture about uncoverings-by-bases, for which we give some supporting evidence and which we prove for some special cases. In particular, we consider the case of the symmetric group in its action on 2-subsets, where we make use of the theory of graph decompositions. Finally, we discuss the implications this conjecture has for the complexity of the decoding algorithm.

13.
We revisit the problem of computing the topology and geometry of a real algebraic plane curve. The topology is of prime interest, but geometric information, such as the position of singular and critical points, is also relevant. A challenge is to compute this information efficiently for the given coordinate system even if the curve is not in generic position. Previous methods based on the cylindrical algebraic decomposition use sub-resultant sequences and computations with polynomials with algebraic coefficients. A novelty of our approach is to replace these tools by Gröbner basis computations and isolation with rational univariate representations. This has the advantage of avoiding computations with polynomials with algebraic coefficients, even in non-generic positions. Our algorithm isolates critical points in boxes and computes a decomposition of the plane by rectangular boxes. This decomposition also induces a new approach for computing an arrangement of polylines isotopic to the input curve. We also present an analysis of the complexity of our algorithm. An implementation of our algorithm demonstrates its efficiency, in particular on high-degree non-generic curves.

14.
This paper presents the convergence proof and complexity analysis of an interior-point framework that solves linear programming problems by dynamically selecting and adding relevant inequalities. First, we formulate a new primal–dual interior-point algorithm for solving linear programmes in non-standard form with equality and inequality constraints. The algorithm uses a primal–dual path-following predictor–corrector short-step interior-point method that starts with a reduced problem without any inequalities and selectively adds a given inequality only if it becomes active on the way to optimality. Second, we prove convergence of this algorithm to an optimal solution at which all inequalities are satisfied, regardless of whether they have been added by the algorithm or not. We thus provide a theoretical foundation for similar schemes already used in practice. We also establish conditions under which the complexity of such an algorithm is polynomial in the problem dimension, and we address the limitations that remain without these conditions as possible directions for further research.
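The idea of adding an inequality only once it matters can be imitated, at a much coarser level, by delayed row generation around an off-the-shelf LP solver: solve a relaxation, add the most violated inequality, and repeat. The sketch below uses scipy.optimize.linprog for the subproblems, which is not the paper's interior-point framework; it merely illustrates the "add inequalities on demand" scheme and assumes the equality-constrained relaxation is bounded.

```python
import numpy as np
from scipy.optimize import linprog

def solve_with_row_generation(c, A_eq, b_eq, A_ub, b_ub, tol=1e-8):
    """Solve  min c@x  s.t.  A_eq@x = b_eq, A_ub@x <= b_ub  by starting from the
    equality-constrained relaxation and adding an inequality only once it is
    violated at the current optimum (delayed row generation)."""
    active = []                                   # indices of inequalities added so far
    while True:
        A = A_ub[active] if active else None
        b = b_ub[active] if active else None
        res = linprog(c, A_ub=A, b_ub=b, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] * len(c), method="highs")
        if not res.success:
            return res                            # infeasible or unbounded relaxation
        violation = A_ub @ res.x - b_ub
        worst = int(np.argmax(violation))
        if violation[worst] <= tol or worst in active:
            return res                            # every inequality is satisfied
        active.append(worst)                      # add the most violated inequality
```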

15.
In the present paper, we consider a method for improving the computational algorithm for finding the factorials of integers; the method linearly accelerates the computation. We also present several algorithms that efficiently realize this method and analyze their complexity.
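The abstract does not spell out the method, but a standard way to speed up the straightforward factorial loop is to group the factors so that big-integer multiplications act on operands of balanced size. The sketch below contrasts the naive loop with such a product-tree variant; it is only a generic illustration of this kind of speedup, not necessarily the method of the paper.

```python
def factorial_naive(n):
    """Plain left-to-right product 2 * 3 * ... * n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_tree(n):
    """Compute n! by recursively splitting the range 2..n in half, so that
    large intermediate numbers are multiplied with operands of similar size."""
    def prod(lo, hi):
        if lo > hi:
            return 1
        if lo == hi:
            return lo
        mid = (lo + hi) // 2
        return prod(lo, mid) * prod(mid + 1, hi)
    return prod(2, n)

assert factorial_tree(10) == factorial_naive(10) == 3628800
```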

16.
We investigate a fast algorithm, introduced by Brenier, which computes the Legendre-Fenchel transform of a real-valued function. We generalize his work to boxed domains and introduce a parameter in order to build an iterative algorithm. The new approach of separating primal and dual spaces allows a clearer understanding of the algorithm and yields better numerical behavior. We extend known complexity results and give new ones about the convergence of the algorithm.
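For reference, the discrete Legendre-Fenchel conjugate that such algorithms compute is simply f*(s) = max_x (s·x − f(x)) over a grid. The naive O(nm) sketch below just implements this definition; Brenier's fast algorithm and the iterative variant discussed in the paper obtain the same values far more efficiently. The quadratic test function is an assumed example.

```python
import numpy as np

def discrete_legendre_transform(x, fx, s):
    """Naive discrete Legendre-Fenchel conjugate on a grid:
       f*(s_i) = max_j (s_i * x_j - f(x_j))."""
    # outer product s_i * x_j, minus f(x_j), maximised over the primal grid j
    return np.max(np.outer(s, x) - fx[None, :], axis=1)

x = np.linspace(-2.0, 2.0, 401)
s = np.linspace(-2.0, 2.0, 401)
print(discrete_legendre_transform(x, 0.5 * x**2, s)[:5])   # conjugate of x^2/2 is s^2/2
```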

17.
An improved predictor-corrector algorithm for linear programming
In this paper we propose an improved predictor-corrector algorithm for linear programming. The directions of our predictor and corrector steps differ from those of Mizuno-Todd-Ye [4]. The iteration complexity of our algorithm is …, while in the corrector step we reduce the duality gap by a constant factor.

18.
Hawkes processes are important in point process theory and its applications, and simulation of such processes is often needed for various statistical purposes. This article concerns a simulation algorithm for unmarked and marked Hawkes processes, exploiting the fact that the process can be constructed as a Poisson cluster process. The algorithm suffers from edge effects but is much faster than the perfect simulation algorithm introduced in our previous work, Møller and Rasmussen (2004). We derive various useful measures for the error committed when using the algorithm, and we discuss various empirical results for the algorithm compared with perfect simulations. Extensions of the algorithm and the results to more general types of marked point processes are also discussed.
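For the unmarked case with an exponential excitation kernel, the cluster-process construction translates directly into a short simulation: immigrants arrive as a homogeneous Poisson process, and every event spawns a Poisson number of offspring with exponentially distributed delays. The sketch below follows this recipe under those assumptions (exponential kernel, made-up parameters) and simply truncates offspring beyond the time horizon, which is exactly the kind of edge effect the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hawkes_cluster(mu, alpha, beta, T):
    """Simulate an unmarked Hawkes process on [0, T] with intensity
    lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i)) via its Poisson cluster
    representation: immigrants form a Poisson(mu) process, and each event
    produces Poisson(alpha/beta) offspring with Exp(beta) waiting times
    (stability requires the branching ratio alpha/beta < 1)."""
    immigrants = list(rng.uniform(0.0, T, rng.poisson(mu * T)))
    events, queue = [], immigrants
    while queue:
        t = queue.pop()
        if t > T:                      # offspring past the horizon are discarded (edge effect)
            continue
        events.append(t)
        n_children = rng.poisson(alpha / beta)
        children = t + rng.exponential(1.0 / beta, n_children)
        queue.extend(children.tolist())
    return np.sort(np.array(events))

times = simulate_hawkes_cluster(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
print(len(times), times[:5])
```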

19.
Journal of Complexity, 1993, 9(2): 291–312
We study the average case complexity of multivariate integration for the class of smooth functions equipped with the folded Wiener sheet measure. The complexity is derived by reducing this problem to multivariate integration in the worst case setting but for a different space of functions. Fully constructive optimal information and an optimal algorithm are presented. Next, fully constructive almost optimal information and an almost optimal algorithm are also presented which have some advantages for practical implementation.

20.
We consider the problem of projecting a vector onto the intersection of a hyperplane and a box in R^n. This paper extends a previous result of Maculan, Minoux, and Plateau (Ref. 1) concerning the projection of a vector onto the intersection of a hyperplane and R^n_+. We present an O(n) time algorithm based on the linear-time median-finding algorithm, which determines the median of the components of the vector to be projected. Computational results are also presented in order to evaluate the algorithm and its time complexity. We consider two sets of instances which are randomly generated for any given n. The algorithm was successful in solving all the instances in a reasonable time.
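The projection in question has a simple one-dimensional dual structure: the solution is clip(y − λa, l, u) for a multiplier λ chosen so that the hyperplane constraint holds. The sketch below recovers λ by plain bisection, assuming finite box bounds and a feasible instance; the paper's contribution is to locate the same multiplier in O(n) time by median finding over the breakpoints rather than by bisection.

```python
import numpy as np

def project_hyperplane_box(y, a, b, lo, hi, tol=1e-10):
    """Project y onto {x : a @ x = b, lo <= x <= hi}.
    The optimal point is x(lmbda) = clip(y - lmbda*a, lo, hi), where lmbda is
    found from the monotone equation a @ x(lmbda) = b (feasibility and finite
    box bounds are assumed)."""
    def x_of(lmbda):
        return np.clip(y - lmbda * a, lo, hi)
    def g(lmbda):                      # monotone non-increasing in lmbda
        return a @ x_of(lmbda)
    left, right = -1.0, 1.0            # bracket the multiplier by doubling
    while g(left) < b:
        left *= 2.0
    while g(right) > b:
        right *= 2.0
    for _ in range(200):               # plain bisection on the bracket
        mid = 0.5 * (left + right)
        if g(mid) > b:
            left = mid
        else:
            right = mid
        if right - left < tol:
            break
    return x_of(0.5 * (left + right))

y = np.array([0.3, -1.2, 2.5])
a = np.array([1.0, 2.0, -1.0])
print(project_hyperplane_box(y, a, b=1.0, lo=np.zeros(3), hi=np.ones(3)))
```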
