Similar articles
20 similar articles found (search time: 78 ms)
1.
We study exact algorithms for the MAX-CUT problem. Introducing a new technique, we present an algorithmic scheme that computes a maximum cut in graphs with bounded maximum degree. Our algorithm runs in time O*(2^((1−2/Δ)n)). We also describe a MAX-CUT algorithm for general graphs. Its time complexity is O*(2^(mn/(m+n))). Both algorithms use polynomial space.
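For reference, the trivial exact algorithm enumerates all 2^n vertex bipartitions; the schemes above improve on this exponent for bounded-degree and sparse graphs. A minimal brute-force baseline (not the paper's algorithm; the function name and input layout are illustrative):

```python
from itertools import product

def max_cut(n, edges):
    """Exhaustively find a maximum cut of a graph on vertices 0..n-1.

    Runs in O*(2^n) time and serves only as a correctness baseline for
    the faster bounded-degree schemes described in the abstract.
    """
    best_size, best_side = -1, None
    for side in product((0, 1), repeat=n):
        # An edge is cut when its endpoints lie on different sides.
        size = sum(1 for u, v in edges if side[u] != side[v])
        if size > best_size:
            best_size, best_side = size, side
    return best_size, best_side

# A 4-cycle: the bipartition {0, 2} vs {1, 3} cuts all four edges.
size, side = max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```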

2.
Promethee II is a prominent method for multi-criteria decision aid (MCDA) that builds a complete ranking on a set of potential actions by assigning each of them a so-called net flow score. However, to calculate these scores, each pair of actions has to be compared, causing the computational load to increase quadratically with the number of actions, eventually leading to prohibitive execution times for large decision problems. For some problems, however, a trade-off between the ranking’s accuracy and the required evaluation time may be acceptable. Therefore, we propose a piecewise linear model that approximates Promethee II’s net flow scores and reduces the computational complexity (with respect to the number of actions) from quadratic to linear at the cost of some wrongly ranked actions. Simulations on artificial problem instances allow us to quantify this time/quality trade-off and to provide probabilistic bounds on the problem size above which our model satisfactorily approximates Promethee II’s rankings. They show, for instance, that for decision problems of 10,000 actions evaluated on 7 criteria, the Pearson correlation coefficient between the original scores and our approximation is at least 0.97. When put in balance with computation times that are more than 7000 times faster than for the Promethee II model, the proposed approximation model represents an interesting alternative for large problem instances.
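The quadratic pairwise computation the abstract refers to can be sketched as follows, assuming the classic "usual" preference function (preference 1 whenever one action strictly beats another on a criterion); the function name and input layout are illustrative:

```python
def net_flows(evaluations, weights):
    """Promethee II net flow scores with the 'usual' preference function.

    evaluations: one criterion vector per action (higher is better).
    The double loop over action pairs is exactly the quadratic cost that
    the abstract's linear-time approximation avoids.
    """
    n = len(evaluations)
    total_w = sum(weights)
    scores = []
    for a in evaluations:
        flow = 0.0
        for b in evaluations:
            for w, ga, gb in zip(weights, a, b):
                if ga > gb:
                    flow += w        # a preferred to b on this criterion
                elif gb > ga:
                    flow -= w        # b preferred to a on this criterion
        scores.append(flow / (total_w * (n - 1)))
    return scores

# Three actions on two equally weighted criteria; the dominating action
# gets net flow +1, the dominated one -1.
scores = net_flows([[3, 3], [2, 2], [1, 1]], [1, 1])
```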

3.
In this paper, we consider the Online Target Date Assignment Problem (OnlineTDAP) for general downstream problems, where the downstream costs are nonnegative, additive, and satisfy the triangle inequality. We analyze the algorithm smart, which was introduced by Angelelli et al. [3], and give its exact competitive ratio depending on the number of requests. Since the obtained competitive ratio is strictly below 2 for every number of requests, we answer in the affirmative the question posed in Angelelli et al. [4] of whether smart has a competitive ratio strictly less than 2. Moreover, we introduce a new algorithm called clever and show that this strategy has a competitive ratio of 3/2. We show that this is asymptotically optimal by proving that no online algorithm can achieve a competitive ratio of 3/2 − ε for any ε > 0.

4.
We consider a generalization of the classical MAX-CUT problem where two objective functions are simultaneously considered. We derive some theorems on the existence and the non-existence of feasible cuts that are at the same time near-optimal for both criteria. Furthermore, two approximation algorithms with performance guarantees are presented. The first one is deterministic while the second one is randomized. A generalization of these results is given for the bi-criteria MAX-k-CUT problem.

5.
We present differential approximation results (both positive and negative) for optimal satisfiability, optimal constraint satisfaction, and some of the most popular restrictive versions of them. As an important corollary, we exhibit an interesting structural difference between the landscapes of approximability classes in standard and differential paradigms.

6.
We outline a relatively new research agenda aiming at building a new approximation paradigm by matching two distinct domains, polynomial approximation and the exact solution of NP-hard problems by algorithms with guaranteed and non-trivial upper complexity bounds. We show how one can design approximation algorithms achieving ratios that are “forbidden” in polynomial time (unless a very unlikely complexity conjecture is confirmed) with worst-case complexity much lower than that of an exact computation.

7.
Adaptive quadrature codes process a collection of subintervals one at a time. We show how to process them all simultaneously and so exploit vectorization and the use of fast built-in functions and array operations that are so important to efficient computation in MATLAB. Using algebraic transformations we have made it just as easy for users to solve problems on infinite intervals and with moderate end point singularities as problems with finite intervals and smooth integrands. Piecewise-smooth integrands are handled effectively with breakpoints.

8.
9.
This paper proposes a new algorithm for a two-dimensional packing problem first studied by Baker, Coffman, and Rivest (SIAM J. Comput. 9, No. 4 (1980), 846–855). In their model, a finite list of rectangles is to be packed into a rectangular bin of finite width but infinite height. The model has applications to certain scheduling and stock-cutting problems. Since the problem of finding an optimal packing is NP-hard, previous work has been directed at finding polynomial approximation algorithms for the problem, i.e., algorithms which come within a constant times the height used by an optimal packing. For the algorithm proposed in this paper, the ratio of the height obtained by the algorithm and the height used by an optimal packing is asymptotically bounded by 5/4. This bound is an improvement over the bound of 4/3 achieved by the best previous algorithm.
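The strip-packing model can be illustrated with First-Fit Decreasing Height, a classic shelf heuristic for the same problem (not the improved algorithm of this paper; names are illustrative):

```python
def ffdh(rects, bin_width):
    """First-Fit Decreasing Height packing into a strip of width
    bin_width and unbounded height.  Because rectangles are placed in
    order of decreasing height, every rectangle fits vertically on any
    already-open shelf, so only widths need checking.

    rects: (width, height) pairs, each width <= bin_width.
    Returns the total height used.
    """
    shelves = []  # each shelf: [remaining_width, shelf_height]
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for shelf in shelves:
            if shelf[0] >= w:            # first shelf with room left
                shelf[0] -= w
                break
        else:
            shelves.append([bin_width - w, h])   # open a new shelf
    return sum(height for _, height in shelves)

# Two 5x3 rectangles share one shelf of height 3 in a strip of width 10.
used = ffdh([(5, 3), (5, 3)], 10)
```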

10.
With standard linear programming solvers there is always some uncertainty about the precise values of the optimal solutions. We implemented a program using exact rational arithmetic to compute proofs for the feasibility and optimality of an LP solution. This paper reports the exact optimal objective values for all NETLIB problems.
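The core idea, checking an LP solution in exact rational arithmetic, can be sketched with Python's `Fraction` type. This is a feasibility-and-objective check only; a full optimality proof, as in the abstract, would also verify a dual solution. Names are illustrative:

```python
from fractions import Fraction as F

def verify_lp_point(A, b, c, x, claimed_value):
    """Check, exactly, that x is feasible for A x <= b and attains the
    claimed objective value c^T x.  All arithmetic uses rationals, so
    there is no floating-point uncertainty in the verdict.
    """
    A = [[F(v) for v in row] for row in A]
    b, c, x = [F(v) for v in b], [F(v) for v in c], [F(v) for v in x]
    feasible = all(sum(a * xi for a, xi in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    value = sum(ci * xi for ci, xi in zip(c, x))
    return feasible and value == F(claimed_value)

# Maximize x + y subject to x <= 1, y <= 1: the point (1, 1) is feasible
# and attains objective value 2.
ok = verify_lp_point([[1, 0], [0, 1]], [1, 1], [1, 1], [1, 1], 2)
```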

11.
We describe how to maintain the triangular factor of a sparse QR factorization when columns are added and deleted and Q cannot be stored for sparsity reasons. The updating procedures could be thought of as a sparse counterpart of Reichel and Gragg’s package QRUP. They allow us to solve a sequence of sparse linear least squares subproblems in which each matrix Bk is an independent subset of the columns of a fixed matrix A, and Bk+1 is obtained by adding or deleting one column. Like Coleman and Hulbert [T. Coleman, L. Hulbert, A direct active set algorithm for large sparse quadratic programs with simple bounds, Math. Program. 45 (1989) 373-406], we adapt the sparse direct methodology of Björck and Oreborn of the late 1980s, but without forming A^T A, which may be only positive semidefinite. Our Matlab 5 implementation works with a suitable row and column numbering within a static triangular sparsity pattern that is computed in advance by symbolic factorization of A^T A and preserved with placeholders.

12.
We present an algorithm to compute the pointlike subsets of a finite semigroup with respect to the pseudovariety R of all finite R-trivial semigroups. The algorithm is inspired by Henckell’s algorithm for computing the pointlike subsets with respect to the pseudovariety of all finite aperiodic semigroups. We also give an algorithm to compute J-pointlike sets, where J denotes the pseudovariety of all finite J-trivial semigroups. We finally show that, in contrast with the situation for R, the natural adaptation of Henckell’s algorithm to J computes pointlike sets, but not all of them.

13.
Summary. The use of mixed finite element methods is well-established in the numerical approximation of the problem of nearly incompressible elasticity, and its limit, Stokes flow. The question of stability over curved elements for such methods is of particular significance in the p version, where, since the element size remains fixed, exact representation of the curved boundary by (large) elements is often used. We identify a mixed element which we show to be optimally stable in both p and h refinement over curvilinear meshes. We prove optimal p version (up to ) and h version (p = 2, 3) convergence for our element, and illustrate its optimality through numerical experiments. Received August 25, 1998 / Revised version received February 16, 1999 / Published online April 20, 2000 – © Springer-Verlag 2000

14.

Text

In this paper, Chen's iterated integrals are generalized by interpolating, to certain complex values, the positive integer number of times that particular forms are iterated in integrals along specific paths. These generalized iterated integrals satisfy both an additive iterative property and a comultiplication formula. In a particular example, a (non-classical) multiplicative iterative property is also shown to hold. After developing this theory in the first part of the paper we discuss various applications, including the expression of certain zeta functions as complex iterated integrals (from which an obstruction to the existence of a contour-integration proof of the functional equation for the Dedekind zeta function emerges); a way of thinking about complex iterated derivatives arising from a reformulation of a result of Gel'fand and Shilov in the theory of distributions; and a direct topological proof of the monodromy of polylogarithms.

Video

For a video summary of this paper, see http://www.youtube.com/watch?v=dsVvo7s8BYU.

15.
Earliest deadline first (edf) is a widely used algorithm for online deadline scheduling. It has long been known that edf is optimal for scheduling an underloaded, single-processor system; recent results on the extra-resource analysis of edf further revealed that edf, when using moderately faster processors, can achieve optimal performance in the underloaded, multi-processor setting. This paper initiates the extra-resource analysis of edf for overloaded systems, showing that edf supplemented with a simple form of admission control can provide a similar performance guarantee in both the single- and multi-processor settings.
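A minimal single-processor, preemptive edf simulation at unit-time granularity (a plain baseline, not the paper's faster-processor or admission-control variants; names are illustrative):

```python
import heapq

def edf_schedule(jobs):
    """Simulate preemptive Earliest Deadline First on one processor.

    jobs: list of (name, release, deadline, work) with integer times.
    At every time unit the ready job with the earliest deadline runs.
    Returns the set of job names that finish by their deadlines.
    """
    events = sorted(jobs, key=lambda j: j[1])   # by release time
    ready, done, t, i = [], set(), 0, 0
    while i < len(events) or ready:
        if not ready:                           # idle until next release
            t = max(t, events[i][1])
        while i < len(events) and events[i][1] <= t:
            name, _, deadline, work = events[i]
            heapq.heappush(ready, (deadline, name, work))
            i += 1
        deadline, name, work = heapq.heappop(ready)
        t += 1                                  # run one unit of the job
        if work > 1:
            heapq.heappush(ready, (deadline, name, work - 1))
        elif t <= deadline:
            done.add(name)                      # finished in time
    return done

# Underloaded instance: both jobs meet their deadlines under edf.
met = edf_schedule([("a", 0, 2, 1), ("b", 0, 4, 2)])
```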

16.
Polynomial time approximation schemes and parameterized complexity
In this paper, we study the relationship between the approximability and the parameterized complexity of NP optimization problems. We introduce a notion of polynomial fixed-parameter tractability and prove that, under a very general constraint, an NP optimization problem has a fully polynomial time approximation scheme if and only if the problem is polynomial fixed-parameter tractable. By enforcing a constraint of planarity on the W-hierarchy studied in parameterized complexity theory, we obtain a class of NP optimization problems, the planar W-hierarchy, and prove that all problems in this class have efficient polynomial time approximation schemes (EPTAS). The planar W-hierarchy seems to contain most of the known EPTAS problems, and is significantly different from the class introduced by Khanna and Motwani in their efforts in characterizing optimization problems with polynomial time approximation schemes.

17.
The best approximation algorithm for Max Cut in graphs of maximum degree 3 uses semidefinite programming, has approximation ratio 0.9326, and its running time is Θ(n^3.5 log n); but the best combinatorial algorithms have approximation ratio only 4/5, achieved in O(n^2) time [J.A. Bondy, S.C. Locke, J. Graph Theory 10 (1986) 477–504; E. Halperin, et al., J. Algorithms 53 (2004) 169–185]. Here we present an improved combinatorial approximation, which is a 5/6-approximation algorithm that runs in O(n^2) time, perhaps improvable even to O(n). Our main tool is a new type of vertex decomposition for graphs of maximum degree 3.

18.
Multiple criteria sorting aims at assigning alternatives evaluated on several criteria to predefined ordered categories. In this paper, we consider a well known multiple criteria sorting method, Electre Tri, which involves three types of preference parameters: (1) category limits defining the frontiers between consecutive categories, (2) weights and majority level specifying which coalitions form a majority, and (3) veto thresholds characterizing discordance effects. We propose an elicitation procedure to infer category limits from assignment examples provided by multiple decision makers. The procedure computes a set of category limits and vetoes common to all decision makers, with variable weights for each decision maker. Hence, the method helps reaching a consensus among decision makers on the category limits and veto thresholds, whereas finding a consensus on weights is left aside. The inference procedure is based on mixed integer linear programming and performs well even for datasets corresponding to real-world decision problems. We provide an illustrative example of the use of the method and analyze the performance of the proposed algorithms.
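The role the category limits play in Electre Tri can be sketched with the pessimistic assignment rule, here with concordance only (vetoes omitted for brevity); the function name and input layout are illustrative, not the paper's MILP-based elicitation procedure:

```python
def electre_tri_pessimistic(action, limits, weights, majority):
    """Assign an action to the highest category whose lower frontier it
    outranks at the given majority level (concordance only, no vetoes).

    action:  criterion vector of the alternative (higher is better).
    limits:  lower frontier of each category, listed from worst to best;
             with k frontiers, categories are numbered 0 (worst) to k.
    """
    def outranks(a, profile):
        # Weighted share of criteria on which a is at least as good.
        support = sum(w for w, ga, gp in zip(weights, a, profile)
                      if ga >= gp)
        return support / sum(weights) >= majority

    category = 0                         # worst category by default
    for k, profile in enumerate(limits, start=1):
        if outranks(action, profile):
            category = k
    return category

# With frontiers at (0,0), (5,5), (10,10), the action (6,6) clears the
# first two frontiers but not the third.
cat = electre_tri_pessimistic([6, 6], [[0, 0], [5, 5], [10, 10]], [1, 1], 0.5)
```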

19.
We describe a simple computing technique for the tournament choice problem. It rests upon relational modeling and uses the BDD-based computer system RelView for the evaluation of the relation-algebraic expressions that specify the solutions and for the visualization of the computed results. The Copeland set can immediately be identified using RelView’s labeling feature. Relation-algebraic specifications of the Condorcet non-losers, the Schwartz set, the top cycle, the uncovered set, the minimal covering set, the Banks set, and the tournament equilibrium set are delivered. We present an example of a tournament on a small set of alternatives, for which the above choice sets are computed and visualized via RelView. The technique described in this paper is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other problems of social choice and game theory.
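Of the choice sets listed above, the Copeland set has the most direct definition: the alternatives with the maximum number of wins. A plain-Python counterpart of the relation-algebraic RelView computation (illustrative names, assuming the tournament is given as a set of ordered "beats" pairs):

```python
def copeland_set(alternatives, beats):
    """Copeland set of a tournament: alternatives with the most wins.

    beats: set of ordered pairs (a, b) meaning a beats b.
    """
    wins = {a: sum(1 for b in alternatives if (a, b) in beats)
            for a in alternatives}
    best = max(wins.values())
    return {a for a, w in wins.items() if w == best}

# In a 3-cycle every alternative has exactly one win, so the Copeland
# set is the whole set of alternatives.
cycle = copeland_set(["a", "b", "c"], {("a", "b"), ("b", "c"), ("c", "a")})
```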

20.
George Baron immigrated to the United States from England at the end of the eighteenth century. He edited the first mathematical journal in the United States, the Mathematical Correspondent. This journal was strongly influenced by the popular English journals that contained mathematical problems and their solutions.
