Similar Documents
20 similar documents found
1.
The dual simplex algorithm has become a strong contender in solving large scale LP problems. One key problem of any dual simplex algorithm is to obtain a dual feasible basis as a starting point. We give an overview of methods which have been proposed in the literature and present new stable and efficient ways to combine them within a state-of-the-art optimization system for solving real world linear and mixed integer programs. Furthermore, we address implementation aspects and the connection between dual feasibility and LP-preprocessing. Computational results are given for a large set of large scale LP problems, which show our dual simplex implementation to be superior to the best existing research and open-source codes and competitive with the leading commercial code on many of our most difficult problem instances.
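To make the starting-point requirement concrete, a basis is dual feasible when every nonbasic reduced cost has the sign required by the bound at which its variable sits. In standard textbook notation (not the authors' own), with basic cost vector $c_B$, basis matrix $B$ and nonbasic column $a_j$:

\[
d_j = c_j - c_B^{\top} B^{-1} a_j
\;\;\begin{cases}
\ge 0 & \text{if nonbasic } x_j \text{ is at its lower bound},\\
\le 0 & \text{if nonbasic } x_j \text{ is at its upper bound}.
\end{cases}
\]

The methods surveyed in the paper are ways of constructing or repairing a basis so that these conditions hold before the dual simplex iterations begin.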

2.
We present a parallelization of the revised simplex method for large extensive forms of two-stage stochastic linear programming (LP) problems. These problems have been considered too large to solve with the simplex method; instead, decomposition approaches based on Benders decomposition or, more recently, interior-point methods are generally used. However, these approaches do not provide optimal basic solutions, which allow for efficient hot-starts (e.g., in a branch-and-bound context) and can provide important sensitivity information. Our approach exploits the dual block-angular structure of these problems inside the linear algebra of the revised simplex method in a manner suitable for high-performance distributed-memory clusters or supercomputers. While this paper focuses on stochastic LPs, the work is applicable to all problems with a dual block-angular structure. Our implementation is competitive in serial with highly efficient sparsity-exploiting simplex codes and achieves significant relative speed-ups when run in parallel. Additionally, very large problems with hundreds of millions of variables have been successfully solved to optimality. This is the largest-scale parallel sparsity-exploiting revised simplex implementation that has been developed to date and the first truly distributed solver. It is built on novel analysis of the linear algebra for dual block-angular LP problems when solved by using the revised simplex method and a novel parallel scheme for applying product-form updates.
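For reference, the dual block-angular (extensive-form) structure exploited here is that of a two-stage stochastic LP written out over scenarios $s = 1,\dots,S$; in generic notation (not the paper's own symbols):

\[
\min\; c^{\top} x + \sum_{s=1}^{S} p_s\, q_s^{\top} y_s
\quad \text{s.t.} \quad
A x = b, \qquad
T_s x + W_s y_s = h_s \;\; (s = 1,\dots,S), \qquad
x \ge 0,\; y_s \ge 0.
\]

The first-stage columns associated with $x$ link otherwise independent diagonal blocks $W_s$, and it is this column-linked (dual block-angular) pattern that the parallel linear algebra is organized around.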

3.
This paper introduces the design and implementation of two parallel dual simplex solvers for general large scale sparse linear programming problems. One approach, called PAMI, extends a relatively unknown pivoting strategy called suboptimization and exploits parallelism across multiple iterations. The other, called SIP, exploits purely single iteration parallelism by overlapping computational components when possible. Computational results show that the performance of PAMI is superior to that of the leading open-source simplex solver, and that SIP complements PAMI in achieving speedup when PAMI results in slowdown. One of the authors has implemented the techniques underlying PAMI within the FICO Xpress simplex solver and this paper presents computational results demonstrating their value. In developing the first parallel revised simplex solver of general utility, this work represents a significant achievement in computational optimization.

4.
The revised simplex method is often the method of choice when solving large scale sparse linear programming problems, particularly when a family of closely-related problems is to be solved. Each iteration of the revised simplex method requires the solution of two linear systems and a matrix-vector product. For a significant number of practical problems the result of one or more of these operations is usually sparse, a property we call hyper-sparsity. Analysis of the commonly-used techniques for implementing each step of the revised simplex method shows them to be inefficient when hyper-sparsity is present. Techniques to exploit hyper-sparsity are developed and their performance is compared with the standard techniques. For the subset of our test problems that exhibits hyper-sparsity, the average speedup in solution time is 5.2 when these techniques are used. For this problem set our implementation of the revised simplex method which exploits hyper-sparsity is shown to be competitive with the leading commercial solver and significantly faster than the leading public-domain solver.
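As an illustration of the kind of saving involved (a generic sketch, not the authors' implementation), a column-oriented forward substitution $Lx = b$ can skip all work attached to zero entries of the intermediate vector, which is where a hyper-sparse right-hand side pays off:

```cpp
#include <vector>

// Sketch: unit lower-triangular solve L x = b exploiting a sparse right-hand
// side. Only the strictly-below-diagonal entries of L are stored column-wise
// (CSC-style) in col_start / row_index / value; x enters holding b and is
// overwritten with the solution. Real hyper-sparse codes additionally predict
// the nonzero pattern symbolically, so even the outer scan touches only
// predicted nonzeros.
void ftran_lower(const std::vector<int>& col_start,
                 const std::vector<int>& row_index,
                 const std::vector<double>& value,
                 std::vector<double>& x) {
    const int n = static_cast<int>(col_start.size()) - 1;
    for (int j = 0; j < n; ++j) {
        if (x[j] == 0.0) continue;  // hyper-sparsity: nothing to eliminate
        for (int p = col_start[j]; p < col_start[j + 1]; ++p)
            x[row_index[p]] -= value[p] * x[j];  // update rows below j
    }
}
```

When only a handful of entries of $x$ are nonzero, the cost of this solve is proportional to the work actually performed rather than to the dimension of the basis.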

5.
Primal–dual interior point methods and the HKM method in particular have been implemented in a number of software packages for semidefinite programming. These methods have performed well in practice on small to medium-sized SDPs. However, primal–dual codes have had some trouble in solving larger problems because of the storage requirements and required computational effort. In this paper we describe a parallel implementation of the primal–dual method on a shared memory system. Computational results are presented, including the solution of some large scale problems with over 50,000 constraints.

6.
The simplex method is frequently the most efficient method of solving linear programming (LP) problems. This paper reviews previous attempts to parallelise the simplex method in relation to efficient serial simplex techniques and the nature of practical LP problems. For the major challenge of solving general large sparse LP problems, there has been no parallelisation of the simplex method that offers significantly improved performance over a good serial implementation. However, there has been some success in developing parallel solvers for LPs that are dense or have particular structural properties. As an outcome of the review, this paper identifies scope for future work towards the goal of developing parallel implementations of the simplex method that are of practical value.

7.
This paper presents an application of parallel computing techniques to the solution of an important class of planning problems known as generalized networks. Three parallel primal simplex variants for solving generalized network problems are presented. Data structures used in a sequential generalized network code are briefly discussed and their extension to a parallel implementation of one of the primal simplex variants is given. Computational testing of the sequential and parallel codes, both written in Fortran, was done on the CRYSTAL multicomputer at the University of Wisconsin, and the computational results are presented. Maximum efficiency occurred for multiperiod generalized network problems, where a speedup approximately linear in the number of processors was achieved. This research was supported in part by NSF grants DCR-8503148 and CCR-8709952 and by AFOSR grant AFOSR-86-0194.

8.
A cutting plane algorithm for a clustering problem
In this paper we consider a clustering problem that arises in qualitative data analysis. This problem can be transformed to a combinatorial optimization problem, the clique partitioning problem. We have studied the latter problem from a polyhedral point of view and determined large classes of facets of the associated polytope. These theoretical results are utilized in this paper. We describe a cutting plane algorithm that is based on the simplex method and uses exact and heuristic separation routines for some of the classes of facets mentioned before. We discuss some details of the implementation of our code and present our computational results. We mention applications from, e.g., zoology, economics, and the political sciences.
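For orientation, the clique partitioning problem referred to is usually stated over edge variables of a complete graph: $x_{ij} = 1$ exactly when objects $i$ and $j$ are placed in the same cluster. In a standard textbook form (generic notation; depending on the sign convention of the weights the objective is minimized or maximized):

\[
\min \sum_{i<j} w_{ij} x_{ij}
\quad \text{s.t.} \quad
x_{ij} + x_{jk} - x_{ik} \le 1 \;\; \text{for all distinct } i, j, k, \qquad x_{ij} \in \{0,1\}.
\]

The triangle (transitivity) inequalities force the chosen edges to describe a partition into cliques and are the simplest of the facet-defining classes studied for the associated polytope.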

9.
Conditional Value-at-Risk (CVaR) is a portfolio evaluation function having appealing features such as sub-additivity and convexity. Although the CVaR function is nondifferentiable, scenario-based CVaR minimization problems can be reformulated as linear programs (LPs) that afford solutions via widely-used commercial software. However, finding solutions through LP formulations for problems having many financial instruments and a large number of price scenarios can be time-consuming as the dimension of the problem greatly increases. In this paper, we propose a two-phase approach that is suitable for solving CVaR minimization problems having a large number of price scenarios. In the first phase, conventional differentiable optimization techniques are used while circumventing nondifferentiable points, and in the second phase, we employ a theoretically convergent, variable target value nondifferentiable optimization technique. The resultant two-phase procedure guarantees convergence to optimality in the limit. As an optional third phase, we additionally perform a switchover to a simplex solver starting with a crash basis obtained from the second phase when finite convergence to an exact optimum is desired. This three-phase procedure substantially reduces the effort required in comparison with the direct use of a commercial stand-alone simplex solver (CPLEX 9.0). Moreover, the two-phase method provides highly-accurate near-optimal solutions with significantly improved performance over the interior point barrier implementation of CPLEX 9.0 as well, especially when the number of scenarios is large. We also provide some benchmarking results on using an alternative popular proximal bundle nondifferentiable optimization technique.
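The LP reformulation alluded to is the standard scenario-based (Rockafellar–Uryasev) form, stated here in generic notation rather than the paper's: with confidence level $\beta$, scenario probabilities $p_s$ and losses $f(x,\xi_s)$,

\[
\min_{x \in X,\; \alpha,\; z}\;\; \alpha + \frac{1}{1-\beta} \sum_{s=1}^{S} p_s z_s
\quad \text{s.t.} \quad
z_s \ge f(x, \xi_s) - \alpha, \qquad z_s \ge 0, \qquad s = 1,\dots,S.
\]

When the losses are linear in $x$ this is an LP, but it carries one auxiliary variable and one constraint per scenario; that growth with the number of scenarios is precisely what makes the direct LP route expensive and motivates the two-phase approach.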

10.
The simplex algorithm is still the best known and most frequently used way to solve LP problems. Khachian has suggested a method to solve these problems in polynomial time. The average behaviour of his method, however, is still inferior to that of modern simplex-based LP codes. A new gradient-based approach which also has polynomial worst-case behaviour has been suggested by Karmarkar. This method was modified, programmed and compared with other available LP codes. It is shown that the numerical efficiency of Karmarkar's method compares favourably with other LP codes, particularly for problems with many variables and few constraints.

11.
This paper deals with the solution of nonlinear programming problems arising from elliptic control problems by an interior point scheme. At each step of the scheme, we have to solve a large scale symmetric and indefinite system; inner iterative solvers, with an adaptive stopping rule, can be used in order to avoid unnecessary inner iterations, especially when the current outer iterate is far from the solution. In this work, we analyse the method of multipliers and the preconditioned conjugate gradient method as inner solvers for interior point schemes. We discuss the convergence of the whole approach, the implementation details and report the results of numerical experimentation on a set of large scale test problems arising from the discretization of elliptic control problems. A comparison with other interior point codes is also reported. This research was supported by the Italian Ministry for Education, University and Research (MIUR) projects: FIRB Project “Parallel Nonlinear Numerical Optimization PN 2 O” (grant n. RBAU01JYPN) and COFIN/PRIN04 Project “Numerical Methods and Mathematical Software for Applications” (grant n. 2004012559).

12.
We propose techniques for the solution of the LP relaxation and the Lagrangean dual in combinatorial optimization and nonlinear programming problems. Our techniques find the optimal solution value and the optimal dual multipliers of the LP relaxation and the Lagrangean dual in polynomial time using as a subroutine either the Ellipsoid algorithm or the recent algorithm of Vaidya. Moreover, in problems of a certain structure our techniques find not only the optimal solution value, but the solution as well. Our techniques lead to significant improvements in the theoretical running time compared with previously known methods (interior point methods, Ellipsoid algorithm, Vaidya's algorithm). We apply our method to the solution of the LP relaxation and the Lagrangean dual of several classical combinatorial problems, such as the traveling salesman problem, the vehicle routing problem, the Steiner tree problem, the k-connected problem, multicommodity flows, network design problems, network flow problems with side constraints, facility location problems, K-polymatroid intersection, the multiple item capacitated lot sizing problem, and stochastic programming. In all these problems our techniques significantly improve the theoretical running time and yield the fastest way to solve them.

13.
We present an implementation of the LP Dual Active Set Algorithm (LP DASA) based on a quadratic proximal approximation, a strategy for dropping inactive equations from the constraints, and recently developed algorithms for updating a sparse Cholesky factorization after a low-rank change. Although our main focus is linear programming, the first- and second-order proximal techniques that we develop are applicable to general concave–convex Lagrangians and to linear equality and inequality constraints. We use Netlib LP test problems to compare our proximal implementation of LP DASA to the Simplex and Barrier algorithms as implemented in CPLEX. This material is based upon work supported by the National Science Foundation under Grant No. 0203270.

14.
The constrained maximum flow problem is to send the maximum flow from a source to a sink in a directed capacitated network where each arc has a cost and the total cost of the flow cannot exceed a budget. This problem is similar to some variants of classical problems such as the constrained shortest path problem, the constrained transportation problem, or the constrained assignment problem, all of which have important applications in practice. The constrained maximum flow problem itself has important applications, for example in logistics, telecommunications and computer networks. In this research, we present an efficient specialized network simplex algorithm that significantly outperforms two widely used LP solvers, CPLEX and lp_solve. Our algorithm's CPU times are on average 27 times faster than those of CPLEX with its dual simplex algorithm, the closest competitor of our algorithm.
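Written out in generic network-flow notation (not taken from the paper), the problem on a network $(N, A)$ with source $s$, sink $t$, capacities $u_{ij}$, arc costs $c_{ij}$ and budget $B$ is

\[
\max\; v
\quad \text{s.t.} \quad
\sum_{j:(i,j) \in A} x_{ij} - \sum_{j:(j,i) \in A} x_{ji} =
\begin{cases} v & i = s,\\ -v & i = t,\\ 0 & \text{otherwise}, \end{cases}
\qquad
0 \le x_{ij} \le u_{ij},
\qquad
\sum_{(i,j) \in A} c_{ij} x_{ij} \le B.
\]

It is an ordinary maximum flow LP with a single budget side constraint, and it is this one extra non-network row that the specialized network simplex algorithm has to accommodate.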

15.
In this paper we discuss some instances where dense matrix techniques can be utilized within a sparse simplex linear programming solver. The main emphasis is on the use of the Schur complement matrix as a part of the basis matrix representation. This approach makes it possible to represent the basis matrix as an easily invertible sparse matrix and one or more dense Schur complement matrices. We describe our variant of this method, which uses updating of the QR factorization of the Schur complement matrix. We also discuss some implementation issues of the LP software package which is based on this approach.
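The role of the Schur complement can be seen from the usual block-elimination identity (generic notation, not the paper's). If a basis-related system is partitioned as

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix},
\qquad
S = D - C A^{-1} B,
\]

then $x_2 = S^{-1}(b_2 - C A^{-1} b_1)$ and $x_1 = A^{-1}(b_1 - B x_2)$, so only the easily invertible sparse block $A$ and the (typically small and dense) Schur complement $S$ ever need to be factorized; maintaining a QR factorization of $S$ under column changes is the updating scheme the abstract describes.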

16.
We present a simplification and generalization of the recent homogeneous and self-dual linear programming (LP) algorithm. The algorithm does not use any Big-M initial point and achieves $O(\sqrt{n}\,L)$-iteration complexity, where $n$ and $L$ are the number of variables and the length of the data of the LP problem. It also detects LP infeasibility based on a provable criterion. Its preliminary implementation with a simple predictor and corrector technique results in an efficient computer code in practice. In contrast to other interior-point methods, our code solves NETLIB problems, feasible or infeasible, starting simply from $x = e$ (primal variables), $y = 0$ (dual variables), $z = e$ (dual slack variables), where $e$ is the vector of all ones. We describe our computational experience in solving these problems, and compare our results with OB1.60, a state-of-the-art implementation of interior-point algorithms. Research supported in part by NSF Grant DDM-9207347 and by an Iowa College of Business Administration Summer Grant. Part of this work was done while the author was on a sabbatical leave from the University of Iowa and visiting the Cornell Theory Center, Cornell University, Ithaca, NY 14853, USA, supported in part by the Cornell Center for Applied Mathematics and by the Advanced Computing Research Institute, a unit of the Cornell Theory Center, which receives major funding from the National Science Foundation and IBM Corporation, with additional support from New York State and members of its Corporate Research Institute.
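For context, the homogeneous and self-dual embedding of the primal–dual pair $\min\{c^{\top}x : Ax = b,\ x \ge 0\}$ and $\max\{b^{\top}y : A^{\top}y + z = c,\ z \ge 0\}$ is commonly stated (in its simplified form; generic notation, not quoted from the paper) as

\[
A x - b \tau = 0, \qquad
-A^{\top} y - z + c \tau = 0, \qquad
b^{\top} y - c^{\top} x - \kappa = 0, \qquad
x, z, \tau, \kappa \ge 0,\; y \text{ free}.
\]

A strictly complementary solution with $\tau > 0$ yields an optimal pair $(x/\tau,\, y/\tau,\, z/\tau)$, while $\kappa > 0$ certifies primal or dual infeasibility, which is the kind of provable detection criterion mentioned above; the trivial starting point $x = e$, $y = 0$, $z = e$ is what removes the need for any Big-M construction.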

17.
We propose a novel algorithm for solving multiparametric linear programming problems. Rather than visiting different bases of the associated LP tableau, we follow a geometric approach based on the direct exploration of the parameter space. The resulting algorithm has computational advantages, namely the simplicity of its implementation in a recursive form and an efficient handling of primal and dual degeneracy. Illustrative examples describe the approach throughout the paper. The algorithm is used to solve finite-time constrained optimal control problems for discrete-time linear dynamical systems.
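In generic form (illustrative notation, not the authors'), the multiparametric LP in question is

\[
z^*(\theta) = \min_{x}\; c^{\top} x
\quad \text{s.t.} \quad
A x \le b + S\theta, \qquad \theta \in \Theta,
\]

and the goal is to describe the optimizer $x^*(\theta)$ and the value $z^*(\theta)$ explicitly as piecewise-affine functions of $\theta$ over a partition of $\Theta$ into critical regions. Exploring the parameter space directly means building this partition region by region rather than enumerating the optimal bases that generate it.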

18.
An efficient algorithm is proposed for finding all solutions of nonlinear equations using linear programming (LP). This algorithm is based on a simple test (termed the LP test) for the nonexistence of a solution to a system of nonlinear equations in a given region. In the conventional LP test, the system of nonlinear equations is transformed into an LP problem, to which the simplex method is applied. However, although the LP test is very powerful, it requires many pivot operations for each region. In this paper, we use the dual simplex method in the LP test, which makes the average number of pivot operations per region much smaller (less than one, for example) and makes the algorithm very efficient. By numerical examples, it is shown that the proposed algorithm can find all solutions of systems of 200 nonlinear equations in practical computation time.

19.
A specialization of the dual simplex method is developed for solving the linear programming (LP) knapsack problem subject to generalized upper bound (GUB) constraints. The LP/GUB knapsack problem is of interest both for solving more general LP problems by the dual simplex method, and for applying surrogate constraint strategies to the solution of 0–1 Multiple Choice integer programming problems. We provide computational bounds of o(n log n) for our method, where n is the total number of problem variables. These bounds improve on the previous best estimate of the order of complexity of the LP/GUB knapsack problem (due to Witzgall) and provide connections to computational bounds for the ordinary knapsack problem. We further provide a variant of our method which has only slightly inferior worst-case bounds, yet which is ideally suited to solving integer multiple choice problems due to its ability to post-optimize while retaining variables otherwise weeded out by convex dominance criteria.
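In generic notation (illustrative, not the paper's), the LP knapsack problem with GUB constraints is

\[
\max \sum_{j=1}^{n} c_j x_j
\quad \text{s.t.} \quad
\sum_{j=1}^{n} a_j x_j \le b, \qquad
\sum_{j \in G_k} x_j \le 1 \;\; (k = 1,\dots,K), \qquad
0 \le x_j \le 1,
\]

where the sets $G_k$ group the variables into the GUB (multiple choice) classes; restricting $x_j \in \{0,1\}$ gives the 0–1 Multiple Choice problems to which the surrogate constraint strategies mentioned above are applied.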

20.
As linear programs have grown larger and more complex, infeasible models are appearing more frequently. Because of the scale and complexity of the models, automated assistance is very often needed in determining the cause of the infeasibility so that model repairs can be made. Fortunately, researchers have developed algorithms for analysing infeasible LPs in recent years, and these have lately found their way into commercial LP computer codes. This paper briefly reviews the underlying algorithms, surveys the computer codes, and compares their performance on a set of test problems.
