Similar documents
 20 similar documents found (search time: 640 ms)
1.
The revised simplex method is often the method of choice when solving large scale sparse linear programming problems, particularly when a family of closely-related problems is to be solved. Each iteration of the revised simplex method requires the solution of two linear systems and a matrix vector product. For a significant number of practical problems the result of one or more of these operations is usually sparse, a property we call hyper-sparsity. Analysis of the commonly-used techniques for implementing each step of the revised simplex method shows them to be inefficient when hyper-sparsity is present. Techniques to exploit hyper-sparsity are developed and their performance is compared with the standard techniques. For the subset of our test problems that exhibits hyper-sparsity, the average speedup in solution time is 5.2 when these techniques are used. For this problem set our implementation of the revised simplex method which exploits hyper-sparsity is shown to be competitive with the leading commercial solver and significantly faster than the leading public-domain solver.
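As a rough illustration of the kind of saving the abstract describes (and not the authors' actual implementation), the sketch below shows a PRICE-like product \(\pi^{T}A\) in Python that touches only the rows of A indexed by the nonzeros of a hyper-sparse vector \(\pi\); the function and variable names are invented for the example.

```python
# Illustrative sketch only: exploit hyper-sparsity of pi in a PRICE-like
# product pi^T A by visiting only the rows where pi is nonzero.
import numpy as np
import scipy.sparse as sp

def hyper_sparse_price(A_csr, pi):
    """Compute pi^T A while touching only the rows indexed by nonzeros of pi."""
    result = np.zeros(A_csr.shape[1])
    for i in np.flatnonzero(pi):                       # few nonzeros when hyper-sparse
        lo, hi = A_csr.indptr[i], A_csr.indptr[i + 1]
        result[A_csr.indices[lo:hi]] += pi[i] * A_csr.data[lo:hi]
    return result

# Tiny usage example on a random sparse matrix.
A = sp.random(1000, 2000, density=0.001, format="csr", random_state=0)
pi = np.zeros(1000)
pi[[3, 17, 256]] = [1.0, -2.0, 0.5]                    # only three nonzeros
assert np.allclose(hyper_sparse_price(A, pi), A.T @ pi)
```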

2.
The Kelley cutting plane method is one of the methods commonly used to optimize the dual function in the Lagrangian relaxation scheme. Usually the Kelley cutting plane method uses the simplex method as the optimization engine. It is well known that the simplex method leaves the current vertex, follows an ascending edge and stops at the nearest vertex. What would happen if one continued the line search up to the best point instead? As a possible answer, we propose the face simplex method, which freely explores the polyhedral surface by following Rosen's gradient projection combined with a global line search over the whole surface. Furthermore, to avoid the zig-zagging of the gradient projection, we propose a conjugate gradient version of the face simplex method. For our preliminary numerical tests we have implemented this method in Matlab. This implementation clearly outperforms basic Matlab implementations of the simplex method. Compared with state-of-the-art simplex implementations in C or similar languages, our Matlab implementation is competitive only when there are many cutting planes.
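The following Python fragment is a much-simplified sketch of one Rosen-style projected-gradient step with a ratio-test line search for \(\max c^{T}x\) subject to \(Ax \le b\). It only illustrates the kind of move described above; it is not the paper's face simplex method or its conjugate gradient variant, and all names are invented for the example.

```python
# Simplified sketch: one projected-gradient step on the current face of
# {x : A x <= b}, followed by a ratio test playing the role of a line search.
import numpy as np

def projected_gradient_step(A, b, c, x, tol=1e-9):
    active = np.abs(A @ x - b) <= tol                  # constraints tight at x
    d = c.astype(float)
    if active.any():
        Aa = A[active]
        # Project the objective gradient c onto the null space of the active rows.
        d = c - Aa.T @ np.linalg.lstsq(Aa @ Aa.T, Aa @ c, rcond=None)[0]
    if np.linalg.norm(d) <= tol:
        return x, False                                # no ascent direction on this face
    slack, Ad = b - A @ x, A @ d
    steps = slack[Ad > tol] / Ad[Ad > tol]             # distance to each blocking constraint
    step = steps.min() if steps.size else 1.0          # assumes a bounded move, for the sketch
    return x + step * d, True
```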

3.
There are well-known examples of cycling in the linear programming simplex method having basis size two and requiring only six pivots. We prove that any example having basis size two for the network simplex method requires at least ten pivots. We also present an example that achieves this lower bound. In addition, we show that an attractive variant of Cunningham's noncycling method does admit cycling.

4.
5.
6.
This paper evaluates an algorithm for solving network flow optimization problems with quadratic cost functions. Strategies for fast implementation are discussed and the results of extensive numerical tests are given. The performance of the algorithm measured by CPU time is compared with that of the convex simplex method specialized for quadratic network programming. Performance of the two methods is analysed with respect to network size and density, and other parameters of interest. The algorithm is shown to perform significantly better on the majority of problems. We also show how the algorithm may be used to solve non-linear convex network optimization problems by the use of sequential quadratic programming.
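As a generic illustration of a quadratic-cost network flow problem (not the paper's specialized algorithm nor its convex simplex competitor), the sketch below solves a three-node example with SciPy's SLSQP, an SQP-type solver; the network data are invented for the example.

```python
# Generic sketch: a tiny quadratic-cost network flow solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

# Node-arc incidence matrix for arcs (0->1), (0->2), (1->2).
N = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]], dtype=float)
d = np.array([2.0, 0.0, -2.0])        # send 2 units from node 0 to node 2
a = np.array([1.0, 2.0, 1.0])         # quadratic cost coefficients per arc
c = np.array([1.0, 0.5, 0.2])         # linear cost coefficients per arc

cost = lambda x: float(np.sum(a * x**2 + c * x))
grad = lambda x: 2.0 * a * x + c
res = minimize(cost, x0=np.zeros(3), jac=grad, method="SLSQP",
               bounds=[(0.0, 5.0)] * 3,                        # arc capacities
               # one flow-conservation row is redundant, so drop it
               constraints={"type": "eq", "fun": lambda x: N[:-1] @ x - d[:-1]})
print(res.x)   # optimal arc flows
```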

7.
This paper presents a new primal-dual simplex method and uses it to analyze the dual information hidden in the classical simplex method. We re-evaluate the classical simplex method and discuss in detail its connection with the modern simplex method. Two modified versions are also given. The new algorithm features a small computational cost and simple implementation, and its computational performance is good. Preliminary numerical experiments show that the modern simplex method is clearly superior to the classical method.
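The dual information mentioned above can be read off any optimal basis; the short sketch below (illustrative only, with invented names, not the paper's algorithm) recovers the simplex multipliers and reduced costs for \(\min c^{T}x,\ Ax=b,\ x\ge 0\).

```python
# Illustrative only: the dual solution hidden in the classical simplex method.
import numpy as np

def duals_from_basis(A, c, basis):
    """Simplex multipliers y (B^T y = c_B) and reduced costs for a given basis."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])
    reduced_costs = c - A.T @ y        # zero on basic columns, >= 0 at optimality
    return y, reduced_costs
```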

8.
We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition if using the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as \(\ell_1\_\ell_s\) and Mirror Prox regardless of the sparsity level or problem size.
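The toy sketch below (not the paper's parametric simplex code or its interior point setup; all data invented) checks the Kronecker structure \((B\otimes C)\,\mathrm{vec}(X)=\mathrm{vec}(CXB^{T})\) and solves the standard LP reformulation of basis pursuit with SciPy.

```python
# Toy sketch: Kronecker-structured sensing matrix and basis pursuit as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
B, C = rng.standard_normal((4, 8)), rng.standard_normal((5, 10))
A = np.kron(B, C)                                   # 20 x 80 sensing matrix

# Kronecker identity with column-major vec: (B (x) C) vec(X) = vec(C X B^T).
X = rng.standard_normal((10, 8))
assert np.allclose(A @ X.flatten(order="F"), (C @ X @ B.T).flatten(order="F"))

x_true = np.zeros(80)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]              # very sparse signal
b = A @ x_true

# Basis pursuit min ||x||_1 s.t. Ax = b, written as an LP with x = u - v.
n = A.shape[1]
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x_true)))   # typically ~0: the sparse signal is recovered
```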

9.
We address the two-commodity minimum cost flow problem considering two objectives. We show that the biobjective undirected two-commodity minimum cost flow problem can be split into two standard biobjective minimum cost flow problems using a change of variables. This technique allows us to develop a method that finds all the efficient extreme points in the objective space for the two-commodity problem by solving two biobjective minimum cost flow problems. In other words, we generalize Hu's theorem to the biobjective undirected two-commodity minimum cost flow problem. In addition, we develop a parametric network simplex method to solve the biobjective problem.
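The change of variables is not spelled out in the abstract; one plausible reading, following the classical half-sum/half-difference substitution behind Hu's two-commodity flow theorem (notation assumed here, not taken from the paper), is
\[
y = x^{1}+x^{2}, \qquad z = x^{1}-x^{2}, \qquad x^{1}=\tfrac12(y+z), \qquad x^{2}=\tfrac12(y-z),
\]
under which the joint edge capacity \(|x^{1}_{e}|+|x^{2}_{e}|\le u_{e}\) is equivalent to the two separate constraints \(|y_{e}|\le u_{e}\) and \(|z_{e}|\le u_{e}\), so the two-commodity problem splits into independent single-commodity (here biobjective) minimum cost flow problems in \(y\) and \(z\).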

10.
For general sparse linear programs two of the most efficient implementations of the LU factorization with Bartels-Golub updating are due to Reid and Saunders. This paper presents an alternative approach which achieves fast execution times for degenerate simplex method iterations, especially when used with multiple pricing. The method should have wide applicability since the simplex method performs a high proportion of degenerate iterations on most practical problems. A key feature of Saunders' method is combined with the updating strategy of Reid so as to make the scheme suitable for implementation out of core. Its efficiency is confirmed by experimental results.

11.
The simplex method is frequently the most efficient method of solving linear programming (LP) problems. This paper reviews previous attempts to parallelise the simplex method in relation to efficient serial simplex techniques and the nature of practical LP problems. For the major challenge of solving general large sparse LP problems, there has been no parallelisation of the simplex method that offers significantly improved performance over a good serial implementation. However, there has been some success in developing parallel solvers for LPs that are dense or have particular structural properties. As an outcome of the review, this paper identifies scope for future work towards the goal of developing parallel implementations of the simplex method that are of practical value.

12.
The dynamic tree is an abstract data type that allows the maintenance of a collection of trees subject to joining by adding edges (linking) and splitting by deleting edges (cutting), while at the same time allowing reporting of certain combinations of vertex or edge values. For many applications of dynamic trees, values must be combined along paths. For other applications, values must be combined over entire trees. For the latter situation, an idea used originally in parallel graph algorithms, to represent trees by Euler tours, leads to a simple implementation with a time of O(log n) per tree operation, where n is the number of tree vertices. We apply this representation to the implementation of two versions of the network simplex algorithm, resulting in a time of O(log n) per pivot, where n is the number of vertices in the problem network. Research at Princeton University partially supported by the National Science Foundation, Grant No. CCR-8920505, and the Office of Naval Research, Contract No. N0014-91-J-1463. Work during a visit to M.I.T. partially supported by ARPA Contract No. 14-95-1-1246.
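The static sketch below is illustrative only: the paper's dynamic version stores the tour in a balanced search tree so that each link, cut, or aggregate query costs O(log n), whereas this sketch simply builds an Euler tour by depth-first search and shows that every subtree occupies a contiguous segment of the tour, which is the property that makes tour-based aggregation possible.

```python
# Static illustration of the Euler-tour representation of a rooted tree.
from collections import defaultdict

def euler_tour(adj, root):
    tour, first, last = [], {}, {}
    def dfs(v, parent):
        first[v] = len(tour)
        tour.append(v)
        for w in adj[v]:
            if w != parent:
                dfs(w, v)
                tour.append(v)            # return to v after visiting each child
        last[v] = len(tour) - 1
    dfs(root, None)
    return tour, first, last

adj = defaultdict(list)
for u, v in [(0, 1), (0, 2), (1, 3), (1, 4)]:
    adj[u].append(v)
    adj[v].append(u)

tour, first, last = euler_tour(adj, 0)
print(tour)                          # [0, 1, 3, 1, 4, 1, 0, 2, 0]
print(tour[first[1]:last[1] + 1])    # subtree of node 1 as one contiguous segment
```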

13.
In this paper, we investigate how an embedded pure network structure arising in many linear programming (LP) problems can be exploited to create improved sparse simplex solution algorithms. The original coefficient matrix is partitioned into network and non-network parts. For this partitioning, a decomposition technique can be applied. The embedded network flow problem can be solved to optimality using a fast network flow algorithm. We investigate two alternative decompositions, namely Lagrangean and Benders. In the Lagrangean approach the optimal solution of a network flow problem, and in Benders the combined solution of the master and the subproblem, are used to compute good (near optimal and near feasible) solutions for a given LP problem. In both cases, we terminate the decomposition algorithms after a preset number of passes, and the active variables identified by this procedure are then used to create an advanced basis for the original LP problem. We present comparisons with a unit basis and a well-established crash procedure. We find that the computational results of applying these techniques to a selection of Netlib models are promising enough to encourage further research in this area.

14.
A characteristic feature of the primal network simplex algorithm (NSA) is that it usually makes a large number of degenerate iterations. Though cycling and even stalling can be avoided by recently introduced pivot rules for NSA, the practical efficiency of these rules is not yet known. For the case when the simplex algorithm is used to solve the continuous linear programming (LP) problem there exists a practical anti-cycling procedure that has proved to be efficient. It is based on an expanding relaxation of the individual bounds on the variables. In this paper we discuss the adaptation of this method to NSA, taking advantage of the special integer nature of network problems. We also give an account of our experience with these ideas as they are experimentally implemented in the MINET network LP solver. Reductions in CPU time have been achieved on a smaller set of specially structured real-life problems. This research was supported in part by Hungarian Research Fund OTKA 2587, and by DAAD 314 108 060 0 while the author was at Universität Heidelberg, Germany, October 1990.

15.
This paper describes the experimental results of testing a large-scale program for solving minimum-cost network flow problems. With this program, general structure transshipment problems with over ten thousand nodes and thirty thousand arcs have been solved easily without resorting to auxiliary storage. The algorithm is a variant of the primal revised simplex method; the computer code is called LPNET, reflecting the close connection between linear programming and network graphs. This approach substantially improves computer processing time and core storage requirements, especially for relatively large network problems. The results of these experiments are provided. It is emphasized that an organized experimental design and a detailed series of empirical tests are crucial for an efficient implementation. Research supported in part by TRW Systems Group and Harvard Business School.

16.
Implementing the Nelder-Mead simplex algorithm with adaptive parameters
In this paper, we first prove that the expansion and contraction steps of the Nelder-Mead simplex algorithm possess a descent property when the objective function is uniformly convex. This property provides some new insight into why the standard Nelder-Mead algorithm becomes inefficient in high dimensions. We then propose an implementation of the Nelder-Mead method in which the expansion, contraction, and shrink parameters depend on the dimension of the optimization problem. Our numerical experiments show that the new implementation outperforms the standard Nelder-Mead method for high-dimensional problems.
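SciPy's Nelder-Mead implementation exposes dimension-dependent parameters of this kind through its adaptive option (the SciPy documentation cites this line of work for that option). The short comparison below is a sketch, with the test function and problem size chosen arbitrarily, of the effect on a moderately high-dimensional problem.

```python
# Sketch: standard vs. dimension-adaptive Nelder-Mead parameters in SciPy.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x0 = np.full(30, 2.0)
standard = minimize(rosenbrock, x0, method="Nelder-Mead",
                    options={"maxfev": 100000, "adaptive": False})
adaptive = minimize(rosenbrock, x0, method="Nelder-Mead",
                    options={"maxfev": 100000, "adaptive": True})
print(standard.fun, adaptive.fun)   # the adaptive variant typically ends much lower
```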

17.
Instead of trying to recognize and avoid degenerate steps in the simplex method (as some variants do), we have developed a new Phase I algorithm that is impervious to degeneracy. The new algorithm solves a non-negative least-squares problem in order to find a Phase I solution. In each iteration, a simple two-variable least-squares subproblem is used to select an incoming column to augment a set of independent columns (called basic) to get a strictly better fit to the right-hand side. Although this is analogous in many ways to the simplex method, it can be proved that strict improvement is attained at each iteration, even in the presence of degeneracy. Thus cycling cannot occur, and convergence is guaranteed. This algorithm is closely related to a number of existing algorithms proposed for non-negative least-squares and quadratic programs. When used on the 30 smallest NETLIB linear programming test problems, the new Phase I algorithm was almost 3.5 times faster than a particular implementation of the simplex method; on some problems, it was over 10 times faster. The best results were generally seen on the more degenerate problems.
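The underlying idea can be sketched with a library NNLS solver (this is not the authors' column-by-column algorithm with two-variable subproblems; the data are invented for the example): a zero residual certifies a feasible Phase I point for \(Ax=b,\ x\ge 0\).

```python
# Sketch of the idea: Phase I via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
b = A @ np.abs(rng.standard_normal(40))   # constructed so a feasible point exists

x, residual = nnls(A, b)
print(residual)    # ~0, so x >= 0 with Ax = b is a valid Phase I starting point
```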

18.
We introduce a new network simplex pivot rule for the shortest path simplex algorithm. This new pivot rule chooses a subset of non-basic arcs to enter the basis simultaneously. We call this operation a multiple pivot. We show that a shortest path simplex algorithm with this pivot rule performs O(n) multiple pivots and runs in O(nm) time. Our pivot rule is based on the new concept of a pseudo permanently labeled node, and it can be adapted to design a new label-correcting algorithm that runs in O(nm) time. Moreover, this concept lets us introduce new rules to identify negative cycles. Finally, we compare the network simplex algorithm with multiple pivots against other previously proposed efficient network simplex algorithms in a computational experiment.

19.
We present a new network simplex pivot selection rule, which we call the minimum ratio pivot rule, and analyze the worst-case complexity of the resulting network simplex algorithm. We consider networks with n nodes, m arcs, integral arc capacities and integral supplies/demands at the nodes. We define a {0, 1}-valued penalty for each arc of the network. The minimum ratio pivot rule is to select as the entering arc that eligible arc whose addition to the basis creates a cycle with the minimum cost-to-penalty ratio. We show that the resulting primal network simplex algorithm solves the minimum cost flow problem within O(nΔ) pivots and in O(Δ(m + n log n)) time, where Δ is any upper bound on the sum of all arc flows in every feasible flow. For assignment and shortest path problems, our algorithm runs in O(n²) pivots and O(nm + n² log n) time.

20.
《Optimization》2012,61(8):1283-1295
In this article we present the fundamental ideas, concepts and theorems of a basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the simplex method. However, unlike the simplex method, which moves by a pivot operation from a basic point to an improved adjacent basic point, the basic line search algorithm moves, also by pivot operations, from a basic line containing two basic feasible points to an improved basic line containing two basic feasible points whose objective values are no worse than those of the two basic feasible points on the previous basic line. The basic line search algorithm may skip some adjacent vertices, so it can converge to an optimal solution faster than the simplex method. For example, for a 2-dimensional problem the basic line search algorithm can find an optimal solution in only one iteration.

