Similar Documents (20 results)
1.
We study classic machine sequencing problems in an online setting. Specifically, we look at deterministic and randomized algorithms for the problem of scheduling jobs with release dates on identical parallel machines, to minimize the sum of weighted completion times. Both preemptive and non-preemptive versions of the problem are analyzed. Using linear programming techniques, borrowed from the single machine case, we are able to design a 2.62-competitive deterministic algorithm for the non-preemptive version of the problem, improving upon the 3.28-competitive algorithm of Megow and Schulz. Additionally, we show how to combine randomization techniques with the linear programming approach to obtain randomized algorithms for both versions of the problem with competitive ratio strictly smaller than 2 for any number of machines (but approaching two as the number of machines grows). Our algorithms naturally extend several approaches for single and parallel machine scheduling. We also present a brief computational study, for randomly generated problem instances, which suggests that our algorithms perform very well in practice. A preliminary version of this work appears in the Proceedings of the 11th Conference on Integer Programming and Combinatorial Optimization (IPCO), Berlin, 8–10 June 2005.

2.
We initiate the study of a new measure of approximation. This measure compares the performance of an approximation algorithm to the random assignment algorithm. This is a useful measure for optimization problems where the random assignment algorithm is known to give essentially the best possible polynomial time approximation. In this paper, we focus on this measure for the optimization problems Max‐Lin‐2 in which we need to maximize the number of satisfied linear equations in a system of linear equations modulo 2, and Max‐k‐Lin‐2, a special case of the above problem in which each equation has at most k variables. The main techniques we use, in our approximation algorithms and inapproximability results for this measure, are from Fourier analysis and derandomization. © 2004 Wiley Periodicals, Inc. Random Struct. Alg., 2004
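As a quick illustration of the baseline this measure compares against: for Max-Lin-2, a uniformly random 0/1 assignment satisfies each non-trivial equation with probability 1/2, so in expectation it satisfies half of the equations. The sketch below shows only this baseline, under an assumed list-of-(variable indices, right-hand side) input encoding; it is not the paper's Fourier-analytic machinery.

```python
import random

def random_assignment_maxlin2(equations, n, trials=100):
    """Random-assignment baseline for Max-Lin-2 (illustrative sketch).

    Each equation is (vars, b): a list of variable indices whose sum must
    equal b modulo 2.  A uniform random assignment satisfies any non-trivial
    equation with probability 1/2, so about half the equations hold in
    expectation; we keep the best of a few independent trials.
    """
    best, best_x = -1, None
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        sat = sum(1 for vars_, b in equations
                  if sum(x[i] for i in vars_) % 2 == b)
        if sat > best:
            best, best_x = sat, x
    return best_x, best

# Example: three equations over three variables (x0+x1=1, x1+x2=0, x0+x2=1 mod 2)
eqs = [([0, 1], 1), ([1, 2], 0), ([0, 2], 1)]
print(random_assignment_maxlin2(eqs, 3))
```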

3.
The goal of the simplified partial digest problem (SPDP) is motivated by the reconstruction of the linear structure of a DNA chain with respect to a given nucleotide pattern, based on the multiset of distances between the adjacent patterns (interpoint distances) and the multiset of distances between each pattern and the two unlabeled endpoints of the DNA chain (end distances). We consider optimization versions of the problem, called SPDP-Min and SPDP-Max. The aim of SPDP-Min (SPDP-Max) is to find a DNA linear structure with the same multiset of end distances and the minimum (maximum) number of incorrect (correct) interpoint distances. Results are presented on the worst-case efficiency of approximation algorithms for these problems. We suggest a graph-theoretic model for SPDP-Min and SPDP-Max, which can be used to reduce the search space for an optimal solution in either of these problems. We also present heuristic polynomial time algorithms based on this model. In computational experiments with randomly generated and real-life input data, our best algorithm delivered an optimal solution in 100% of the instances for a number of restriction sites not greater than 50.

4.
In this paper we propose a new integer programming formulation for the multilevel facility location problem and a novel 3-approximation algorithm based on LP-rounding. The linear program that we use has a polynomial number of variables and constraints, thus being more efficient than the one commonly used in the approximation algorithms for these types of problems.

5.
We analyze two global algorithms for solving the linear bilevel program (LBP) problem. The first one is a recent algorithm built on a new concept of equilibrium point and a modified version of the outer approximation method. The second one is an efficient branch-and-bound algorithm known in the literature. Based on computational results, we propose some modifications to both algorithms to improve their computational performance. A significant number of experiments is carried out, and a comparative study of the algorithms is presented. The modified procedures have better performance than the original versions.

6.
Motivated by a scheduling problem in multicast environments, we consider the problem of arranging a weighted graph around a circle so as to minimize the total weighted arc length. We describe the first polynomial-time approximation algorithms for this problem, specifically an O(log n)-approximation algorithm for undirected circular arrangements and an approximation algorithm for directed circular arrangements. We also conduct an experimental evaluation of our algorithms and show that a simple heuristic has the best performance in simulations based on busy Web server logs.

7.
In k-means clustering we are given a set of n data points in d-dimensional space and an integer k, and the problem is to determine a set of k points in ℝ^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the very high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance.

We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+ε)-approximation algorithm. We present an example showing that any approach based on performing a fixed number of swaps achieves an approximation factor of at least (9−ε) in all sufficiently high dimensions. Thus, our approximation factor is almost tight for algorithms based on performing a fixed number of swaps. To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.
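A minimal sketch of the swap idea described above, restricted for simplicity to centers chosen among the input points and to single swaps; the paper's multi-swap analysis and its combination with Lloyd's algorithm are not reproduced here.

```python
import numpy as np

def kmeans_cost(points, centers):
    """Mean squared distance from each point to its nearest center."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def single_swap_heuristic(points, k, seed=0, max_passes=5):
    """Single-swap local search for k-means (illustrative sketch only).

    Centers are restricted to input points so candidate swaps form a finite
    set; a swap is kept whenever it strictly lowers the k-means cost.
    """
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(points), size=k, replace=False))
    cost = kmeans_cost(points, points[idx])
    for _ in range(max_passes):
        improved = False
        for pos in range(k):                    # center to swap out
            for cand in range(len(points)):     # candidate point to swap in
                if cand in idx:
                    continue
                trial = idx[:pos] + [cand] + idx[pos + 1:]
                c = kmeans_cost(points, points[trial])
                if c < cost - 1e-12:
                    idx, cost, improved = trial, c, True
        if not improved:                        # local optimum w.r.t. single swaps
            break
    return points[idx], cost

pts = np.random.default_rng(1).normal(size=(120, 2))
print(single_swap_heuristic(pts, 3)[1])
```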


8.
This paper extends some adaptive schemes that have been developed for the Random Walk Metropolis algorithm to more general versions of the Metropolis-Hastings (MH) algorithm, particularly to the Metropolis Adjusted Langevin algorithm of Roberts and Tweedie (1996). Our simulations show that the adaptation drastically improves the performance of such MH algorithms. We study the convergence of the algorithm. Our proofs rely on a new approach to the analysis of stochastic approximation algorithms based on mixingale theory.
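For context, a minimal non-adaptive MALA step with a fixed step size is sketched below; the adaptive schemes studied in the paper, which tune the proposal on the fly, are not implemented, and the target/step values in the example are arbitrary.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, step, n_samples, seed=0):
    """Metropolis Adjusted Langevin Algorithm with a fixed step size (sketch).

    Proposal: y = x + (step/2) * grad log pi(x) + sqrt(step) * N(0, I),
    accepted with the usual Metropolis-Hastings correction.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []

    def log_q(y, x):  # log density (up to a constant) of the Langevin proposal y | x
        mu = x + 0.5 * step * grad_log_pi(x)
        return -np.sum((y - mu) ** 2) / (2.0 * step)

    for _ in range(n_samples):
        prop = x + 0.5 * step * grad_log_pi(x) + np.sqrt(step) * rng.standard_normal(x.shape)
        log_alpha = (log_pi(prop) + log_q(x, prop)) - (log_pi(x) + log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Standard normal target: log pi(x) = -||x||^2 / 2, grad log pi(x) = -x
draws = mala(lambda x: -0.5 * np.sum(x ** 2), lambda x: -x,
             x0=np.zeros(2), step=0.5, n_samples=2000)
print(draws.mean(axis=0), draws.var(axis=0))
```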

9.
We consider the problem of scheduling a sequence of packets over a linear network, where every packet has a source and a target, as well as a release time and a deadline by which it must arrive at its target. The model we consider is bufferless, where packets are not allowed to be buffered in nodes along their paths other than at their source. This model applies to optical networks where opto-electronic conversion is costly, and packets mostly travel through bufferless hops. The offline version of this problem was previously studied in M. Adler et al. (2002) [3]. In this paper we study the online version of the problem, where we are required to schedule the packets without knowledge of future packet arrivals. We use competitive analysis to evaluate the performance of our algorithms. We present the first online algorithms for several versions of the problem. For the problem of throughput maximization, where all packets have uniform weights, we give an algorithm with a logarithmic competitive ratio, and present some lower bounds. For other weight functions, we show algorithms that achieve optimal competitive ratios.

10.
We study budgeted variants of classical cut problems: the Multiway Cut problem, the Multicut problem, and the k-Cut problem, and provide approximation algorithms for these problems. Specifically, for the budgeted multiway cut and the k-cut problems we provide constant factor approximation algorithms. We show that the budgeted multicut problem is at least as hard to approximate as the sparsest cut problem, and we provide a bi-criteria approximation algorithm for it.

11.
Probabilistic proximity searching algorithms based on compact partitions
The main bottleneck of the research in metric space searching is the so-called curse of dimensionality, which makes the task of searching some metric spaces intrinsically difficult, whatever algorithm is used. A recent trend to break this bottleneck resorts to probabilistic algorithms, where it has been shown that one can find 99% of the relevant objects at a fraction of the cost of the exact algorithm. These algorithms are welcome in most applications because resorting to metric space searching already involves a fuzziness in the retrieval requirements. In this paper, we push further in this direction by developing probabilistic algorithms on data structures whose exact versions are the best for high dimensions. As a result, we obtain probabilistic algorithms that are better than the previous ones. We give new insights on the problem and propose a novel view based on time-bounded searching. We also propose an experimental framework for probabilistic algorithms that permits comparing them in offline mode.

12.
We study the approximation of some well-known network design problems, such as the traveling salesman problem (in both its minimization and maximization versions) and the minimum Steiner tree problem, by moderately exponential algorithms. The general goal of moderately exponential approximation is to make up for polynomial-time inapproximability by designing superpolynomial algorithms that achieve approximation ratios unachievable in polynomial time. The worst-case running times of such algorithms are significantly smaller than those needed to solve the handled problems optimally.

13.
In this paper we investigate several solution algorithms for the convex feasibility problem (CFP) and the best approximation problem (BAP), respectively. The algorithms analyzed are already known, but by adequately reformulating the CFP or the BAP we naturally deduce the general projection method for the CFP from the well-known steepest descent method for unconstrained optimization, and we also give a natural strategy for updating the weight parameters. In the linear case we show the connection between the two projection algorithms for the CFP and the BAP, respectively. In addition, we establish the convergence of a method for the BAP under milder assumptions in the linear case. We also show by examples that a conjecture of Bauschke is only partially correct.
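As an illustration of the projection viewpoint, the sketch below runs the classical cyclic projection method on a system of linear inequalities; it is a generic textbook scheme, not the specific weighted reformulation analyzed in the paper, and the example constraints are arbitrary.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a·y <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def cyclic_projections(halfspaces, x0, iters=200):
    """Cyclic projection method for a convex feasibility problem given by
    linear inequalities: repeatedly project onto each constraint in turn.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, np.asarray(a, float), float(b))
    return x

# Feasibility for: x1 + x2 <= 1,  -x1 <= 0,  -x2 <= 0
hs = [(np.array([1.0, 1.0]), 1.0),
      (np.array([-1.0, 0.0]), 0.0),
      (np.array([0.0, -1.0]), 0.0)]
print(cyclic_projections(hs, np.array([3.0, -2.0])))
```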

14.
In this paper, we present new approximation results for the offline problem of single machine scheduling with sequence-independent set-ups and item availability, where the jobs to be scheduled are independent (i.e., have no precedence constraints) and have a common release time. We present polynomial-time approximation algorithms for two versions of this problem. In the first version, the input includes a weight for each job, and the goal is to minimize the total weighted completion time. On any input, our algorithm produces a schedule whose total weighted completion time is within a factor 2 of optimal for that input. In the second version, the input includes a due date for each job, and the goal is to minimize the maximum lateness of any job. On any input, our algorithm produces a schedule with the following performance guarantee: the maximum lateness of a job is at most the maximum lateness of the optimal schedule on a machine that runs at half the speed of our machine.

15.
In this paper we analyze several approaches to the Maximum Independent Set (MIS) problem in hypergraphs with degree bounded by a parameter Δ. Since independent sets in hypergraphs can be strong or weak, we denote by MIS (MSIS) the problem of finding a maximum weak (strong) independent set in hypergraphs, respectively. We propose a general technique that reduces the worst-case analysis of certain algorithms on hypergraphs to their analysis on ordinary graphs. This technique allows us to show that the greedy algorithm for MIS that corresponds to the classical greedy set cover algorithm has a performance ratio of (Δ+1)/2. It also allows us to apply results on local search algorithms on graphs to obtain a (Δ+1)/2 approximation for the weighted MIS and a (Δ+3)/5−ε approximation for the unweighted case. We improve the bound in the weighted case to ⌈(Δ+1)/3⌉ using a simple partitioning algorithm. We also consider another natural greedy algorithm for MIS that adds vertices of minimum degree and achieves only a ratio of Δ−1, significantly worse than on ordinary graphs. For MSIS, we give two variations of the basic greedy algorithm and describe a family of hypergraphs where both algorithms approach the bound of Δ.
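The sketch below gives one plausible reading of a degree-based greedy for weak independent sets (scan vertices in nondecreasing degree order and add a vertex whenever no hyperedge becomes fully contained in the current set); the paper's exact selection rule may differ, and the example hypergraph is arbitrary.

```python
def weak_mis_greedy(n, hyperedges):
    """Greedy weak independent set in a hypergraph (illustrative sketch).

    A set S is weakly independent if no hyperedge lies entirely inside S.
    Vertices are scanned in nondecreasing degree order and added whenever
    they do not complete a hyperedge within the current set.
    """
    degree = [0] * n
    for e in hyperedges:
        for v in e:
            degree[v] += 1
    independent = set()
    for v in sorted(range(n), key=lambda u: degree[u]):
        candidate = independent | {v}
        # reject v if adding it would make some hyperedge a subset of S
        if all(not set(e) <= candidate for e in hyperedges):
            independent.add(v)
    return independent

# Hypergraph on 5 vertices with hyperedges {0,1,2} and {2,3,4}
print(weak_mis_greedy(5, [{0, 1, 2}, {2, 3, 4}]))
```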

16.
Optimization, 2012, 61(5): 705–714
An NP-hard problem (P) of mixed-discrete linear programming is considered, which consists in minimizing a linear objective function subject to a special non-connected subset of an unbounded polymatroid. For this problem we describe three polynomial approximation algorithms, including a greedy algorithm and a fully polynomial approximation scheme solving a special subproblem of (P).

17.
We study the problem of maximizing constrained non-monotone submodular functions and provide approximation algorithms that improve on existing algorithms in terms of either the approximation factor or simplicity. The different constraints that we study are exact cardinality and multiple knapsack constraints, for which we achieve (0.25−ε)-factor algorithms. We also show, as our main contribution, how to use the continuous greedy process for non-monotone functions and, as a result, obtain a 0.13-factor approximation algorithm for maximization over any solvable down-monotone polytope.

18.
A comparison of sequential Delaunay triangulation algorithms
This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer's divide and conquer algorithm, Fortune's sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri and Murota, a new bucketing-based algorithm described in this paper, and Devillers's version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber's convex hull based algorithm.

Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.


19.
Algorithms and implementations for computing the sign function of a triangular matrix are fundamental building blocks for computing the sign of arbitrary square real or complex matrices. We present novel recursive and cache‐efficient algorithms that are based on Higham's stabilized specialization of Parlett's substitution algorithm for computing the sign of a triangular matrix. We show that the new recursive algorithms are asymptotically optimal in terms of the number of cache misses that they generate. One algorithm that we present performs more arithmetic than the nonrecursive version, but this allows it to benefit from calling highly optimized matrix multiplication routines; the other performs the same number of operations as the nonrecursive version, using custom computational kernels instead. We present implementations of both, as well as a cache‐efficient implementation of a block version of Parlett's algorithm. Our experiments demonstrate that the blocked and recursive versions are much faster than the previous algorithms and that the inertia strongly influences their relative performance, as predicted by our analysis.
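For reference, the basic unblocked Parlett substitution for the sign of a real upper triangular matrix is sketched below; it is only the classical starting point, not the stabilized, blocked, or recursive cache-efficient algorithms of the paper, and it assumes distinct diagonal entries with nonzero sign.

```python
import numpy as np

def sign_triu_parlett(T):
    """Sign of a real upper triangular matrix via the basic Parlett
    substitution (sketch).  S has sign(T[i,i]) on the diagonal, and each
    superdiagonal entry is obtained from the commutation relation ST = TS:
        S[i,j] = (T[i,j](S[i,i]-S[j,j]) + sum_k (S[i,k]T[k,j]-T[i,k]S[k,j]))
                 / (T[i,i]-T[j,j]),   i < k < j.
    Requires distinct diagonal entries (the recurrence divides by their difference).
    """
    n = T.shape[0]
    S = np.zeros_like(T, dtype=float)
    for i in range(n):
        S[i, i] = 1.0 if T[i, i] > 0 else -1.0   # sign of each eigenvalue
    for d in range(1, n):                        # superdiagonals, in order
        for i in range(n - d):
            j = i + d
            rhs = T[i, j] * (S[i, i] - S[j, j])
            rhs += S[i, i + 1:j] @ T[i + 1:j, j] - T[i, i + 1:j] @ S[i + 1:j, j]
            S[i, j] = rhs / (T[i, i] - T[j, j])
    return S

T = np.array([[2.0, 1.0, 0.5],
              [0.0, -3.0, 2.0],
              [0.0, 0.0, 1.0]])
S = sign_triu_parlett(T)
print(np.allclose(S @ S, np.eye(3)))   # sign(T)^2 = I
```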

20.
We consider linear multistep algorithms for first-order ODEs and examine the problem of how the λ-frequencies should be tuned in order to obtain the maximal benefit from the exponentially fitted versions of such algorithms. We find that the key to the answer lies in analysing the behaviour of the error. On further investigating the simple case of two-step BDF algorithms, we produce formulae for the optimal λ's and show that, if the optimal λ's are used, the order of the method is increased by one unit. The reported numerical illustrations suggest that further investigations along these lines deserve real attention.
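For comparison, the classical (non-fitted) two-step BDF applied to the linear test equation y' = λy is sketched below; the exponentially fitted variant with tuned λ-frequencies is not implemented, and the step size and λ in the example are arbitrary.

```python
import numpy as np

def bdf2_linear(lam, y0, h, n_steps):
    """Classical two-step BDF on the test equation y' = lam * y (sketch).

    BDF2:  y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2/3) h f(t_{n+2}, y_{n+2}).
    For the linear test equation the implicit step is solved in closed form.
    """
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y0 * np.exp(lam * h)          # start-up value taken from the exact solution
    for n in range(n_steps - 1):
        y[n + 2] = ((4.0 / 3.0) * y[n + 1] - (1.0 / 3.0) * y[n]) / (1.0 - (2.0 / 3.0) * h * lam)
    return y

h, lam, N = 0.1, -2.0, 50
approx = bdf2_linear(lam, 1.0, h, N)
exact = np.exp(lam * h * np.arange(N + 1))
print(np.max(np.abs(approx - exact)))    # global error of the second-order method
```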
