Similar literature
 20 similar documents found (search time: 46 ms)
1.
In this paper a local integral simplex algorithm will be described which, starting with the initial tableau of a set partitioning problem, makes pivots using the pivot-on-one rule until no more such pivots are possible because a local optimum has been found. If the local optimum is also a global optimum, the process stops. Otherwise, a global integral simplex algorithm creates and solves the problems in a search tree consisting of a polynomial number of subproblems, subproblems of subproblems, etc. The solution to at least one of these subproblems is guaranteed to be an optimal solution to the original problem. If that solution has a bounded objective then it is an optimal set partitioning solution of the original problem, but if it has an unbounded objective then the original problem has no feasible solution. It will be shown that the total number of pivots required for the global integral simplex method to solve a set partitioning problem having m rows and n columns, where m is an arbitrary but fixed positive integer, is bounded by a polynomial function of n. A method for programming the algorithms in this paper to run on parallel computers is discussed briefly.
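
The local phase described in this abstract can be sketched compactly. The fragment below is a minimal illustration, assuming a dense tableau with one row per constraint, the reduced-cost row last and the right-hand side in the last column; the function name and tolerances are invented for the example, and the global search-tree phase is not shown.

```python
import numpy as np

def pivot_on_one_local_search(T):
    """Local phase only (a sketch): keep pivoting on a +1 entry in a column with
    negative reduced cost until no such pivot exists, i.e. a local optimum."""
    m = T.shape[0] - 1                           # constraint rows (objective row is last)
    while True:
        entering = [j for j in range(T.shape[1] - 1) if T[-1, j] < -1e-9]
        pivot = None
        for j in entering:
            col, rhs = T[:m, j], T[:m, -1]
            ones = [i for i in range(m) if abs(col[i] - 1.0) < 1e-9]
            positive = [i for i in range(m) if col[i] > 1e-9]
            if not ones or not positive:
                continue
            min_ratio = min(rhs[i] / col[i] for i in positive)
            # admissible pivot: a +1 entry that also attains the minimum ratio,
            # so the basic solution stays feasible and integral
            winners = [i for i in ones if abs(rhs[i] - min_ratio) < 1e-9]
            if winners:
                pivot = (winners[0], j)
                break
        if pivot is None:
            return T                             # no pivot-on-one left: local optimum
        r, j = pivot
        T[r, :] /= T[r, j]
        for i in range(T.shape[0]):
            if i != r:
                T[i, :] -= T[i, j] * T[r, :]
```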

2.
On the computational behavior of a polynomial-time network flow algorithm   (cited by 1; 0 self-citations, 1 by others)
A variation on the Edmonds-Karp scaling approach to the minimum cost network flow problem is examined. This algorithm, which scales costs rather than right-hand sides, also runs in polynomial time. Large-scale computational experiments indicate that the computational behavior of such scaling algorithms may be much better than had been presumed. Within several distributions of square, dense, capacitated transportation problems, a cost scaling code, SCALE, exhibits linear growth in average execution time with the number of edges, while two network simplex codes, RNET and GNET, exhibit greater than linear growth. Our experiments reveal that median and mean execution times are predictable with surprising accuracy for all of the three codes and all three distributions from which test problems were generated. Moreover, for fixed problem size, individual execution times appear to behave as though they are approximately lognormally distributed with constant variance. The experiments also reveal sensitivity of the parameters in the models, and in the models themselves, to variations in the distribution of problems. This argues for caution in the interpretation of such computational studies beyond the realm in which the computations were performed. This work has been supported in part by NSF grants ENG-7910807, ECS-8313853, DMS-8706133, and DDM-8813054, and by AFOSR, NSF, and ONR under NSF grant DMS-8920550 to Cornell University, and by a Sloan Foundation research fellowship held by the first author.
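
Neither SCALE nor the network simplex codes RNET and GNET are reproduced here; the fragment below only mimics the shape of the experiment, timing an off-the-shelf network simplex (NetworkX) on randomly generated dense capacitated transportation instances. The supply, capacity and cost ranges are illustrative choices, not those of the original study.

```python
import random
import time
import networkx as nx

def random_transportation_instance(k, seed=0):
    """Dense k-by-k capacitated transportation problem as a NetworkX DiGraph."""
    rng = random.Random(seed)
    G = nx.DiGraph()
    for i in range(k):
        G.add_node(("s", i), demand=-10)         # each source supplies 10 units
        G.add_node(("t", i), demand=10)          # each sink demands 10 units
    for i in range(k):
        for j in range(k):
            G.add_edge(("s", i), ("t", j),
                       capacity=rng.randint(5, 15),
                       weight=rng.randint(1, 1000))
    return G

for k in (20, 40, 80):                           # k^2 edges, so 400 to 6400 arcs
    G = random_transportation_instance(k, seed=k)
    start = time.perf_counter()
    cost, flow = nx.network_simplex(G)
    print(f"{k:3d} sources, {k * k:5d} edges: cost={cost}, "
          f"time={time.perf_counter() - start:.3f}s")
```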

3.
In this paper we show that the complexity of the simplex method for the linear fractional assignment problem (LFAP) is strongly polynomial. Although LFAP can be solved in polynomial time using various algorithms such as Newton’s method or binary search, no polynomial-time bound for the simplex method applied to LFAP was previously known.

4.
This paper reports on an experimental study of the distribution of the length of simplex paths for the Optimal Assignment Problem. We study the distribution of the pivot counts for a version of the simplex method that, with essentially equal probabilities, introduces any variable with negative reduced cost into the basis. In this situation the pivot counts turn out to be normally distributed and independent of the actual cost coefficients, provided these are sufficiently spread out. Further, the mean and standard deviation grow only moderately with the size of the problem, namely as d^1.8 and d^1.5, respectively, for a d×d problem, implying in particular that the pivot counts concentrate around the mean with growing d. The usual simplex method on the other hand gives a growth of d^1.6. Hence a large part of the favourable polynomial growth experienced on practical problems may be attributed to the fact that the simplex paths are rather short on the average, at least for assignment problems.
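
The randomized rule itself is a one-liner; the sketch below shows just the entering-variable choice described above (uniform among all variables with negative reduced cost) next to the textbook most-negative-reduced-cost rule for comparison. The surrounding simplex machinery and the assignment-problem setup are omitted, and the function names are invented for the example.

```python
import numpy as np

def random_negative_entering(reduced_costs, rng, tol=1e-9):
    """Pick the entering variable uniformly at random among all variables with
    negative reduced cost; return None if the current basis is already optimal."""
    candidates = np.flatnonzero(reduced_costs < -tol)
    return int(rng.choice(candidates)) if candidates.size else None

def dantzig_entering(reduced_costs, tol=1e-9):
    """Textbook rule for comparison: most negative reduced cost."""
    j = int(np.argmin(reduced_costs))
    return j if reduced_costs[j] < -tol else None

rng = np.random.default_rng(0)
costs = np.array([0.0, -2.5, -0.1, 3.0])
print(random_negative_entering(costs, rng))   # 1 or 2, each with probability 1/2
print(dantzig_entering(costs))                # always 1
```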

5.
We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^1.5). The random LP problem is Todd’s probabilistic model with the standard Gaussian distribution.

6.
We describe a new dual algorithm for the minimum cost flow problem. It can be regarded as a variation of the best known strongly polynomial minimum cost flow algorithm, due to Orlin. Indeed we obtain the same running time of O(m log m (m + n log n)), where n and m denote the number of vertices and the number of edges. However, in contrast to Orlin's algorithm we work directly with the capacitated network (rather than transforming it to a transshipment problem). Thus our algorithm is applicable to more general problems (like submodular flow) and is likely to be more efficient in practice. Our algorithm can be interpreted as a cut cancelling algorithm, improving the best known strongly polynomial bound for this important class of algorithms by a factor of m. On the other hand, our algorithm can be considered as a variant of the dual network simplex algorithm. Although dual network simplex algorithms are reportedly quite efficient in practice, the best worst-case running time known so far exceeds the running time of our algorithm by a factor of n.

7.
We consider a class of convex programming problems whose objective function is given as a linear function plus a convex function whose arguments are linear functions of the decision variables and whose feasible region is a polytope. We show that there exists an optimal solution to this class of problems on a face of the constraint polytope of dimension not more than the number of arguments of the convex function. Based on this result, we develop a method to solve this problem that is inspired by the simplex method for linear programming. It is shown that this method terminates in a finite number of iterations in the special case that the convex function has only a single argument. We then use this insight to develop a second algorithm that solves the problem in a finite number of iterations for an arbitrary number of arguments in the convex function. A computational study illustrates the efficiency of the algorithm and suggests that the average-case performance of these algorithms is a polynomial of low order in the number of decision variables. The work of T. C. Sharkey was supported by a National Science Foundation Graduate Research Fellowship. The work of H. E. Romeijn was supported by the National Science Foundation under Grant No. DMI-0355533.

8.
We present two randomized entropy-based algorithms for approximating quite general #P-complete counting problems, like the number of Hamiltonian cycles in a graph, the permanent, the number of self-avoiding walks and the satisfiability problem. In our algorithms we first cast the underlying counting problem into an associated rare-event probability estimation problem, and then apply dynamic importance sampling (IS) to estimate efficiently the desired counting quantity. We construct the IS distribution by using two different approaches: one based on the cross-entropy (CE) method and the other on the stochastic version of the well-known minimum entropy (MinxEnt) method. We also establish convergence of our algorithms and confidence intervals for some special settings and present supportive numerical results, which strongly suggest that both approaches (CE and MinxEnt) have polynomial running time in the size of the problem.
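
The paper's counting algorithms are not reproduced here, but the rare-event mechanism they rely on can be illustrated on a toy problem. The sketch below estimates P(X_1 + ... + X_n >= gamma) for i.i.d. Exp(1) variables with a basic cross-entropy scheme: tilt the sampling distribution to Exp(mean v) and update v from the likelihood-ratio-weighted elite samples. All parameter values are illustrative.

```python
import numpy as np

def ce_rare_event(n=10, gamma=40.0, N=10_000, rho=0.05, iters=50, seed=0):
    """Cross-entropy sketch for the rare event {X_1 + ... + X_n >= gamma}, X_i ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    v = 1.0                                                  # start from the nominal mean
    for _ in range(iters):
        x = rng.exponential(scale=v, size=(N, n))
        s = x.sum(axis=1)
        gamma_t = min(gamma, np.quantile(s, 1.0 - rho))      # elite threshold
        w = v**n * np.exp(-s * (1.0 - 1.0 / v))              # likelihood ratio Exp(1)^n / Exp(v)^n
        elite = s >= gamma_t
        # CE update: likelihood-ratio-weighted mean of the elite sample means
        v = float((w[elite] * s[elite]).sum() / (n * w[elite].sum()))
        if gamma_t >= gamma:
            break
    # final importance-sampling estimate with the tilted parameter v
    x = rng.exponential(scale=v, size=(N, n))
    s = x.sum(axis=1)
    w = v**n * np.exp(-s * (1.0 - 1.0 / v))
    return float(np.mean(w * (s >= gamma)))

# the target is a Gamma(n, 1) tail probability, so the output can be checked
# against scipy.stats.gamma.sf(40, 10)
print(ce_rare_event())
```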

9.
This paper investigates the effectiveness of using finite improvement algorithms for solving decision, search, and optimization problems. Finite improvement algorithms operate in a finite number of iterations, each taking a polynomial amount of work, where strict improvement is required from iteration to iteration. The hardware, software, and way of measuring complexity found in the polynomial setting are modified to identify the concept of repetition and define the new classes of decision problems, FI and NFI. A first NFI-complete problem is given using the idea of FI-transformations. Results relating these new classes to P, NP, and NP-complete are given. It is shown that if an optimization problem in a new class PGS is NP-hard, then NP = co-NP. Two PGS problems are given for which no polynomial algorithms are known to exist.

10.
Given a set of leaf-labelled trees with identical leaf sets, the MAST problem, respectively MCT problem, consists of finding a largest subset of leaves such that all input trees restricted to these leaves are isomorphic, respectively compatible. In this paper, we propose extensions of these problems to the context of supertree inference, where input trees have non-identical leaf sets. This situation is of particular interest in phylogenetics. The resulting problems are called SMAST and SMCT. A sufficient condition is given that identifies cases where these problems can be solved by resorting to MAST and MCT as subproblems. This condition is met, for instance, when only two input trees are considered. Then we give algorithms for SMAST and SMCT that benefit from the link with the subtree problems. These algorithms run in time linear in the time needed to solve MAST, respectively MCT, on an instance of the same or smaller size. It is shown that arbitrary instances of SMAST and SMCT can be turned in polynomial time into instances composed of trees with a bounded number of leaves. SMAST is shown to be W[2]-hard when the considered parameter is the number of input leaves that have to be removed to obtain the agreement of the input trees. A similar result holds for SMCT. Moreover, the corresponding optimization problems, that is the complements of SMAST and SMCT, cannot be approximated in polynomial time within any constant factor, unless P=NP. These results also hold when the input trees have a bounded number of leaves. The presented results apply to both collections of rooted and unrooted trees.

11.
We introduce a scheduling problem on multi-purpose machines with setup times, which is NP-hard in general, and focus on the computational complexity of several special cases involving only two machines. Depending on the number of job groups whose processing sets equal the full set of machines, we give a polynomial-time algorithm and a branch-and-bound algorithm, respectively. For the case in which all job groups have the same number of jobs and equal processing times, we give a polynomial-time optimal algorithm, the odd-even algorithm, thereby proving that this case is solvable in polynomial time.

12.
13.
Minimizing average completion time in the presence of release dates   (cited by 8; 0 self-citations, 8 by others)
A natural and basic problem in scheduling theory is to provide good average quality of service to a stream of jobs that arrive over time. In this paper we consider the problem of scheduling n jobs that are released over time in order to minimize the average completion time of the set of jobs. In contrast to the problem of minimizing average completion time when all jobs are available at time 0, all the problems that we consider are NP-hard, and essentially nothing was known about constructing good approximations in polynomial time. We give the first constant-factor approximation algorithms for several variants of the single and parallel machine models. Many of the algorithms are based on interesting algorithmic and structural relationships between preemptive and nonpreemptive schedules and linear programming relaxations of both. Many of the algorithms generalize to the minimization of average weighted completion time as well. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. This work was performed under US Department of Energy contract number DE-AC04-76AL85000. Research partly supported by NSF Award CCR-9308701, a Walter Burke Research Initiation Award and a Dartmouth College Research Initiation Award. Research partially supported by NSF Research Initiation Award CCR-9211494 and a grant from the New York State Science and Technology Foundation, through its Center for Advanced Technology in Telecommunications.

14.
This work considers the global optimization of general nonconvex nonlinear and mixed-integer nonlinear programming problems with underlying polynomial substructures. We incorporate linear cutting planes inspired by reformulation-linearization techniques to produce tight subproblem formulations that exploit these underlying structures. These cutting plane strategies simultaneously convexify linear and nonlinear terms from multiple constraints and are highly effective at tightening standard linear programming relaxations generated by sequential factorable programming techniques. Because the number of available cutting planes increases exponentially with the number of variables, we implement cut filtering and selection strategies to prevent an exponential increase in relaxation size. We introduce algorithms for polynomial substructure detection, cutting plane identification, cut filtering, and cut selection and embed the proposed implementation in BARON at every node in the branch-and-bound tree. A computational study including randomly generated problems of varying size and complexity demonstrates that the exploitation of underlying polynomial substructures significantly reduces computational time, branch-and-bound tree size, and required memory.

15.
We prove a new complexity bound, polynomial on the average, for the problem of finding an approximate zero of systems of polynomial equations. The average number of Newton steps required by this method is almost linear in the size of the input (dense encoding). We show that the method can also be used to approximate several or all the solutions of non-degenerate systems, and prove that this last task can be done in running time which is linear in the Bézout number of the system and polynomial in the size of the input, on the average.
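
The step being counted is the classical Newton iteration for a square polynomial system. The sketch below shows only that basic step on a toy two-variable system; the paper's average-case analysis and the machinery for locating several or all solutions are not reproduced, and the helper names are invented for the example.

```python
import numpy as np

def newton_polynomial_system(F, J, x0, tol=1e-12, max_steps=50):
    """Plain Newton iteration x <- x - J(x)^{-1} F(x) for a square system F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# toy system: x^2 + y^2 - 1 = 0 and x - y = 0, with solutions at +-(1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_polynomial_system(F, J, x0=[0.8, 0.3]))   # approximately [0.7071, 0.7071]
```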

16.
We consider the problem of obtaining integer solutions to a minmax linear programming problem. Although this general problem is NP-complete, it is shown that a restricted version of this problem can be solved in polynomial time. For this restricted class of problems two polynomial time algorithms are suggested, one of which is strongly polynomial whenever its continuous analogue and an associated linear programming problem can be solved by a strongly polynomial algorithm. Our algorithms can also be used to obtain integer solutions for the minmax transportation problem with an inequality budget constraint. The equality constrained version of this problem is shown to be NP-complete. We also provide some new insights into the solution procedures for the continuous minmax linear programming problem.

17.
We consider general properties of isomorphic scheduling problems, which constitute a new class of pairs of mutually related scheduling problems. Any such pair is composed of a scheduling problem with fixed job processing times and its time-dependent counterpart with processing times that are proportional-linear functions of the job starting times. In order to introduce the class formally, first we formulate a generic scheduling problem with fixed job processing times and define isomorphic problems by a one-to-one transformation of instances of the generic problem into instances of time-dependent scheduling problems with proportional-linear job processing times. Next, we prove basic properties of isomorphic scheduling problems and show how to convert polynomial algorithms for scheduling problems with fixed job processing times into polynomial algorithms for proportional-linear counterparts of the original problems. Finally, we show how approximation algorithms for isomorphic problems are related. Applying the results, we establish new worst-case results for time-dependent parallel-machine scheduling problems and prove that many single- and dedicated-machine time-dependent scheduling problems with proportional-linear job processing times are polynomially solvable.
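
One concrete special case of such a correspondence can be checked numerically: with purely proportional processing times p_j = b_j·S_j (S_j being the start time) on a single machine available from t0 > 0, completion times multiply, so their logarithms behave exactly like fixed processing times log(1 + b_j). The sketch below verifies this identity on a random instance; it is only an illustrative special case, not the paper's general proportional-linear transformation.

```python
import math
import random

rng = random.Random(1)
t0 = 1.0                                          # machine available from t0 > 0
b = [rng.uniform(0.1, 0.9) for _ in range(6)]     # illustrative deterioration rates

# simulate the time-dependent schedule job by job: p_j = b_j * (start time)
t = t0
for bj in b:
    t += bj * t                                   # completion = start * (1 + b_j)

# fixed-processing-time counterpart obtained by the log transform
makespan_fixed = math.log(t0) + sum(math.log(1.0 + bj) for bj in b)

print(t, math.exp(makespan_fixed))                # both print the same makespan
```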

18.
Many dynamic programming algorithms for discrete optimization problems are pure in that they only use min/max and addition operations in their recursions. Some of them, in particular those for various shortest path problems, are even incremental in that one of the inputs to the addition operations is a variable. We present an explicit optimization problem that can be solved by a pure DP algorithm using a polynomial number of operations, but for which any incremental DP algorithm requires a super-polynomial number of operations.

19.
Answering a question of Haugland, we show that the pooling problem with one pool and a bounded number of inputs can be solved in polynomial time by solving a polynomial number of linear programs of polynomial size. We also give an overview of known complexity results and remaining open problems to further characterize the border between (strongly) NP-hard and polynomially solvable cases of the pooling problem.

20.
We study the approximation of some well-known network design problems, such as the traveling salesman problem (in both its minimization and maximization versions) and the minimum Steiner tree problem, by moderately exponential algorithms. The general goal of moderately exponential approximation is to make up for polynomial-time inapproximability by designing superpolynomial algorithms that achieve approximation ratios unachievable in polynomial time, with worst-case running times significantly smaller than those needed to solve the problems optimally.
