Similar Articles
20 similar articles found (search time: 312 ms).
1.
In exact arithmetic, the simplex method applied to a particular linear programming problem instance with real data either shows that it is infeasible, shows that its dual is infeasible, or generates optimal solutions to both problems. Most interior-point methods, on the other hand, do not provide such clear-cut information. If the primal and dual problems have bounded nonempty sets of optimal solutions, they usually generate a sequence of primal or primal-dual iterates that approach feasibility and optimality. But if the primal or dual instance is infeasible, most methods give less precise diagnostics. There are methods with finite convergence to an exact solution even with real data. Unfortunately, bounds on the required number of iterations for such methods applied to instances with real data are very hard to calculate and often quite large. Our concern is with obtaining information from inexact solutions after a moderate number of iterations. We provide general tools (extensions of the Farkas lemma) for concluding that a problem or its dual is likely (in a certain well-defined sense) to be infeasible, and apply them to develop stopping rules for a homogeneous self-dual algorithm and for a generic infeasible-interior-point method for linear programming. These rules allow precise conclusions to be drawn about the linear programming problem and its dual: either near-optimal solutions are produced, or we obtain certificates that all optimal solutions, or all feasible solutions to the primal or dual, must have large norm. Our rules thus allow more definitive interpretation of the output of such an algorithm than previous termination criteria. We give bounds on the number of iterations required before these rules apply. Our tools may also be useful for other iterative methods for linear programming. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
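For reference, one standard form of the classical Farkas lemma that such certificates generalize states an exact alternative; the paper's extensions replace it with approximate, norm-quantified versions, so the statement below is only the underlying textbook result:

```latex
% Farkas lemma (one standard form): the two systems are exact alternatives.
\exists\, x \ge 0:\; Ax = b
\quad\Longleftrightarrow\quad
\nexists\, y:\; A^{\mathsf T} y \le 0,\;\; b^{\mathsf T} y > 0 .
```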

2.
Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today's codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used.
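The paper's incomplete-Cholesky-type preconditioners are not reproduced here; the sketch below only illustrates the general pattern (assemble the indefinite augmented system, build a sparse approximate factorization, and hand it to a Krylov solver as a preconditioner), using SciPy's generic incomplete LU on illustrative random data:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative KKT-like augmented system (not the paper's test set):
#   [ -(Q + D)  A^T ] [dx]   [r1]
#   [    A       0  ] [dy] = [r2]
rng = np.random.default_rng(0)
n, m = 200, 80
A = sp.random(m, n, density=0.05, random_state=0, format="csr")
Q = sp.identity(n)                        # stand-in for the Hessian
D = sp.diags(rng.uniform(1e-4, 1e4, n))   # barrier scaling (source of ill-conditioning)
K = sp.bmat([[-(Q + D), A.T], [A, None]], format="csc")
rhs = rng.standard_normal(n + m)

# Generic incomplete LU as a stand-in preconditioner: much sparser than a
# full factorization, applied through a LinearOperator inside GMRES.
ilu = spla.spilu(K, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(K.shape, ilu.solve)
x, info = spla.gmres(K, rhs, M=M)
print("converged" if info == 0 else f"gmres info = {info}")
```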

3.
This paper suggests an iterative parametric approach for solving multiobjective linear fractional programming (MOLFP) problems which uses only linear programming to obtain efficient solutions and always converges to an efficient solution. A numerical example shows that this approach performs better than some existing algorithms. Randomly generated MOLFP problems are also solved to demonstrate the performance of the newly introduced algorithm.
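The multiobjective scheme itself is not reproduced here; the sketch below shows the classical single-ratio parametric (Dinkelbach-type) iteration, which solves a linear fractional program through a sequence of ordinary LPs and is the basic building block such parametric approaches rely on. All data and names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def parametric_lfp(c, alpha, d, beta, A_ub, b_ub, tol=1e-9, max_iter=50):
    """Maximize (c@x + alpha) / (d@x + beta) over {x >= 0 : A_ub@x <= b_ub},
    assuming d@x + beta > 0 on the feasible set, by Dinkelbach-type steps:
    at each step solve the LP  max_x (c - lam*d)@x + (alpha - lam*beta)."""
    lam = 0.0
    x = None
    for _ in range(max_iter):
        # linprog minimizes, so negate the parametric objective
        res = linprog(-(c - lam * d), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        x = res.x
        f = (c @ x + alpha) - lam * (d @ x + beta)   # value of F(lam)
        lam = (c @ x + alpha) / (d @ x + beta)
        if abs(f) < tol:                             # F(lam*) = 0 at the optimum
            break
    return x, lam

# Tiny hypothetical instance
c, alpha = np.array([2.0, 1.0]), 1.0
d, beta = np.array([1.0, 3.0]), 2.0
A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([4.0])
x_opt, ratio = parametric_lfp(c, alpha, d, beta, A_ub, b_ub)
print(x_opt, ratio)
```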

4.
In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the use of updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given costs in determining earlier directions. We summarize earlier results using a modified version of the OB1-R code of Lustig, Marsten, and Shanno, and we present results from a predictor–corrector code PCx modified to use adaptive iteration. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained.

5.
In order to study the behavior of interior-point methods on very large-scale linear programming problems, we consider the application of such methods to continuous semi-infinite linear programming problems in both primal and dual form. By considering different discretizations of such problems we are led to a certain invariance property for (finite-dimensional) interior-point methods. We find that while many methods are invariant, several, including all those with the currently best complexity bound, are not. We then devise natural extensions of invariant methods to the semi-infinite case. Our motivation comes from our belief that for a method to work well on large-scale linear programming problems, it should be effective on fine discretizations of a semi-infinite problem and it should have a natural extension to the limiting semi-infinite case. Research supported in part by NSF, AFOSR and ONR through NSF grant DMS-8920550.

6.
This paper deals with a central question of structural optimization which is formulated as the problem of finding the stiffest structure which can be made when both the distribution of material as well as the material itself can be freely varied. We consider a general multi-load formulation and include the possibility of unilateral contact. The emphasis of the presentation is on numerical procedures for this type of problem, and we show that the problems after discretization can be rewritten as mathematical programming problems of special form. We propose iterative optimization algorithms based on penalty-barrier methods and interior-point methods and present a broad range of numerical examples that demonstrate the efficiency of our approach. Supported by the project 03ZO7BAY of BMBF (Germany) and the GIF-contract 10455-214.06/95.

7.
We consider implicit integration methods for the solution of stiff initial value problems for second-order differential equations of the special form y'' = f(y). In implicit methods, we are faced with the problem of solving systems of implicit relations. This paper focuses on the construction and analysis of iterative solution methods which are effective in cases where the Jacobian of the right-hand side of the differential equation can be split into a sum of matrices with a simple structure. These iterative methods consist of the modified Newton method and an iterative linear solver to deal with the linear Newton systems. The linear solver is based on the approximate factorization of the system matrix associated with the linear Newton systems. A number of convergence results are derived for the linear solver in the case where the Jacobian matrix can be split into commuting matrices. Such problems often arise in the spatial discretization of time-dependent partial differential equations. Furthermore, the stability matrix and the order of accuracy of the integration process are derived in the case of a finite number of iterations.
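Schematically, if the Jacobian splits as J = J1 + J2 (the commuting case analyzed here), the approximate-factorization idea replaces the modified-Newton system matrix by a product of simpler factors; the form below is only a sketch with a generic method- and stepsize-dependent coefficient τ, and the paper's precise operator may differ:

```latex
I - \tau J \;=\; I - \tau\,(J_1 + J_2)
\;\approx\; \bigl(I - \tau J_1\bigr)\bigl(I - \tau J_2\bigr),
\qquad \text{splitting error } \tau^{2} J_1 J_2 .
```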

8.
An iterative linear programming algorithm for the solution of the convex programming problem is proposed. The algorithm partially solves a sequence of linear programming subproblems whose solution is shown to converge quadratically, superlinearly, or linearly to the solution of the convex program, depending on the accuracy to which the subproblems are solved. The given algorithm is related to inexact Newton methods for the nonlinear complementarity problem. Preliminary results for an implementation of the algorithm are given. This material is based on research supported by the National Science Foundation, Grants DCR-8521228 and CCR-8723091, and by the Air Force Office of Scientific Research, Grant AFOSR-86-0172. The author would like to thank Professor O. L. Mangasarian for stimulating discussions during the preparation of this paper.

9.
This paper deals with regularized penalty-barrier methods for convex programming problems. In the spirit of an iterative proximal regularization approach, an interior-point method is constructed, in which at each step a strongly convex function has to be minimized and the prox-term can be scaled by a variable scaling factor. The convergence of the method is studied for an axiomatically given class of barrier functions. According to the results, a wide class of barrier functions (in particular, logarithmic and exponential functions) can be applied to design special algorithms. For the method with a logarithmic barrier, the rate of convergence is investigated and assumptions that ensure linear convergence are given.
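In generic form (the notation is illustrative, not necessarily the paper's), each step of such a proximally regularized barrier scheme minimizes a strongly convex model of the type

```latex
x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x}
\Bigl\{\, f(x) \;+\; \mu_k \sum_{i} B\bigl(-g_i(x)\bigr)
\;+\; \tfrac{\chi_k}{2}\,\lVert x - x^{k}\rVert^{2} \Bigr\},
```

where f is the convex objective, g_i(x) ≤ 0 are the constraints, B is a barrier from the axiomatic class (for instance B(t) = -ln t, or an exponential barrier), μ_k is the barrier parameter, and χ_k is the variable scaling factor of the prox term.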

10.
Regularization techniques, i.e., modifications on the diagonal elements of the scaling matrix, are considered to be important methods in interior point implementations. So far, regularization in interior point methods has been described for linear programming problems, in which case the scaling matrix is diagonal. It was shown that by regularization, free variables can be handled in a numerically stable way by avoiding column splitting that makes the set of optimal solutions unbounded. Regularization also proved to be efficient for increasing the numerical stability of the computations during the solution of ill-posed linear programming problems. In this paper, we study the factorization of the augmented system arising in interior point methods. In our investigation, we generalize the methods developed and used in linear programming to the case when the scaling matrix is positive semidefinite, but not diagonal. We show that regularization techniques may be applied beyond the linear programming case.
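The augmented system in question has the generic regularized form below (H is the positive semidefinite, possibly non-diagonal scaling matrix; R_p and R_d are the primal and dual regularization terms added to the diagonal blocks; the notation is illustrative and need not match the paper's):

```latex
\begin{pmatrix} -(H + R_p) & A^{\mathsf T} \\ A & R_d \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r_{d} \\ r_{p} \end{pmatrix},
\qquad R_p \succeq 0,\ R_d \succeq 0 .
```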

11.
This paper is devoted to globally convergent methods for solving large sparse systems of nonlinear equations with an inexact approximation of the Jacobian matrix. These methods include difference versions of the Newton method and various quasi-Newton methods. We propose a class of trust region methods together with a proof of their global convergence and describe an implementable globally convergent algorithm which can be used as a realization of these methods. Considerable attention is concentrated on the application of conjugate gradient-type iterative methods to the solution of linear subproblems. We prove that both the GMRES and the smoothed CGS well-preconditioned methods can be used for the construction of globally convergent trust region methods. The efficiency of our algorithm is demonstrated computationally by using a large collection of sparse test problems.
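The paper's own algorithm is not reproduced here; as one standard way such trust-region linear subproblems are solved approximately by a conjugate-gradient-type method, the sketch below implements the Steihaug-Toint truncated CG on illustrative data:

```python
import numpy as np

def steihaug_cg(Hv, g, delta, tol=1e-8, max_iter=200):
    """Approximately minimize m(p) = g@p + 0.5*p@H@p subject to ||p|| <= delta
    by truncated (Steihaug-Toint) conjugate gradients.
    Hv: callable returning H @ v, so H never needs to be formed explicitly."""
    p = np.zeros_like(g)
    r = g.copy()          # gradient of the model at p = 0
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = Hv(d)
        dHd = d @ Hd
        if dHd <= 0:                          # negative curvature: go to the boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # step leaves the trust region
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        p, r, d = p_next, r_next, -r_next + beta * d
    return p

def _to_boundary(p, d, delta):
    # Solve ||p + tau*d|| = delta for the nonnegative root tau
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p + tau * d

# Tiny usage example with an explicit (hypothetical) Hessian
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
print(steihaug_cg(lambda v: H @ v, g, delta=0.5))
```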

12.
On multilevel iterative methods for optimization problems
This paper is concerned with multilevel iterative methods which combine a descent scheme with a hierarchy of auxiliary problems in lower dimensional subspaces. The construction of auxiliary problems as well as applications to an elasto-plastic model and to linear programming are described. The auxiliary problem for the dual of a perturbed linear program is interpreted as the dual of a perturbed aggregated linear program. Coercivity of the objective function over the feasible set is sufficient for the boundedness of the iterates. Equivalents of this condition are presented in special cases. Supported by NSF under grant DMS-8704169, AFOSR under grant 86-0126, and ONR under Contract N00014-83-K-0104. Consulting for American Airlines Decision Technologies, MD 2C55, P.O. Box 619616, DFW, TX 75261-9616, USA. Supported by NSF under grant DMS-8704169 and AFOSR under grant 86-0126.

13.
In this paper we propose a primal-dual interior-point method for large, sparse, quadratic programming problems. The method is based on a reduction presented by Gonzalez-Lima, Wei, and Wolkowicz [14] in order to solve the linear systems arising in the primal-dual methods for linear programming. The main features of this reduction are that it is well defined at the solution set and that it preserves sparsity. These properties add robustness and stability to the algorithm, and very accurate solutions can be obtained. We describe the method and we consider different reductions using the same framework. We discuss the relationship of our proposals and the one used in the LOQO code. We compare and study the different approaches by performing numerical experimentation using problems from the Maros and Meszaros collection. We also include a brief discussion on the meaning and effect of ill-conditioning when solving linear systems. This work was partially supported by DID-USB (GID-001).
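For orientation, in a convex QP in the standard form min ½xᵀQx + cᵀx subject to Ax = b, x ≥ 0, the linear systems in question originate from the primal-dual Newton step, whose augmented form is shown below in standard notation; the reductions discussed in the paper operate on systems of this kind (the exact reduced system is not reproduced here):

```latex
\begin{pmatrix} -(Q + X^{-1}Z) & A^{\mathsf T} \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r_{d} \\ r_{p} \end{pmatrix},
\qquad X = \operatorname{diag}(x),\; Z = \operatorname{diag}(z),
```

where z denotes the vector of dual slacks.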

14.
In this paper we develop new primal-dual interior-point methods for linear programming problems, which are based on the concept of parabolic target space. We show that such schemes work in the infinity-neighborhood of the primal-dual central path. Nevertheless, these methods possess the best known complexity estimate. We demonstrate that the adaptive-step path-following strategies can be naturally incorporated in such schemes.

15.
We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs, linear and nonlinear complementarity problems. In general, constrained equations provide a unified formulation for many mathematical programming problems, including complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities and nonlinear programs. Combining ideas from the damped Newton and interior point methods, we present an iterative algorithm for solving a constrained system of equations and investigate its convergence properties. Specialization of the algorithm and its convergence analysis to complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities are discussed in detail. We also report the computational results of the implementation of the algorithm for solving several classes of convex programs. The work of this author was based on research supported by the National Science Foundation under grants DDM-9104078 and CCR-9213739 and the Office of Naval Research under grant N00014-93-1-0228. The work of this author was based on research supported by the National Science Foundation under grant DMI-9496178 and the Office of Naval Research under grants N00014-93-1-0234 and N00014-94-1-0340.

16.
This paper considers the analysis of process networks with bottlenecks and shows how they may be regarded as simple multi-source maximal flow linear programming problems. We surveyed over 30 Production/Operations Management and management science/OR textbooks, finding that only iterative trial-and-error procedures are now being suggested for this kind of analysis. The maximal flow network approach is easier for complex problems and also offers several advantages not available in the trial-and-error approaches. This paper also discusses the use of a simple linear programming sensitivity result called radial change. The modelling approach suggested here can provide new ideas for improving system capacity following the application of Theory of Constraints techniques.
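As a small illustration of the modelling idea (treating a capacitated process network as a maximal-flow problem solved by linear programming), the sketch below sets up a tiny max-flow LP with SciPy; the network and capacities are hypothetical, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Edges of a tiny capacitated network: source s=0, sink t=3
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
capacity = np.array([4.0, 3.0, 2.0, 3.0, 4.0])
n_edges = len(edges)

# Maximize total flow leaving the source, subject to flow conservation
# at the intermediate nodes and capacity bounds on every edge.
c = np.zeros(n_edges)
for j, (u, v) in enumerate(edges):
    if u == 0:
        c[j] = -1.0              # linprog minimizes, so negate "flow out of s"

A_eq = np.zeros((2, n_edges))    # conservation at intermediate nodes 1 and 2
for j, (u, v) in enumerate(edges):
    for row, node in enumerate((1, 2)):
        if v == node:
            A_eq[row, j] += 1.0  # inflow
        if u == node:
            A_eq[row, j] -= 1.0  # outflow
b_eq = np.zeros(2)

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(np.zeros(n_edges), capacity)))
print("max flow =", -res.fun)    # bottleneck throughput of the network
```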

17.
We provide a survey of interior-point methods for linear programming and its extensions that are based on reducing a suitable potential function at each iteration. We give a fairly complete overview of potential-reduction methods for linear programming, focusing on the possibility of taking long steps and the properties of the barrier function that are necessary for the analysis. We then describe briefly how the methods and results can be extended to certain convex programming problems, following the approach of Nesterov and Todd. We conclude with some open problems. Research supported in part by NSF, AFOSR and ONR through NSF Grant DMS-8920550. Some of this work was done while the author was on a sabbatical leave from Cornell University visiting the Department of Mathematics at the University of Washington.
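A representative potential function for this class of methods is the primal-dual (Tanabe-Todd-Ye) potential shown below; potential-reduction methods decrease such a function by at least a constant per iteration, which drives the duality gap to zero (the survey treats several related potentials as well):

```latex
\Phi_{q}(x, s) \;=\; q\,\ln\!\bigl(x^{\mathsf T} s\bigr) \;-\; \sum_{i=1}^{n} \ln\!\bigl(x_i s_i\bigr),
\qquad q \ge n + \sqrt{n},
```

where x, s > 0 are the primal iterate and the dual slacks; driving Φ_q to -∞ forces xᵀs to 0.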

18.
For solving nonsymmetric linear systems, the well-known GMRES method is considered to be a stable method; however, the work per iteration increases as the number of iterations increases. We consider two new iterative methods, GGMRES and MGMRES, which are a generalization and a modification of the GMRES method, respectively. Instead of using a minimization condition as in the derivation of GGMRES, we use a Galerkin condition to derive the MGMRES method. We also introduce another new iterative method, LAN/MGMRES, which is designed to combine the reliability of GMRES with the reduced work of a Lanczos-type method. A computer program has been written based on the use of the LAN/MGMRES algorithm for solving nonsymmetric linear systems arising from certain elliptic problems. Numerical tests are presented comparing this algorithm with some other commonly used iterative algorithms. These preliminary tests of the LAN/MGMRES algorithm show that it is comparable in terms of both the approximate number of iterations and the overall convergence behavior.
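The GGMRES, MGMRES and LAN/MGMRES variants themselves are not reproduced here; the sketch below only sets up the kind of nonsymmetric baseline comparison described (standard GMRES on a small convection-diffusion-flavoured system, with an iteration counter via SciPy's callback). The problem data are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small nonsymmetric tridiagonal system; the data are illustrative
# and deliberately well conditioned so the baseline converges quickly.
n = 400
main = 4.0 * np.ones(n)
lower = -1.1 * np.ones(n - 1)   # asymmetry makes the matrix nonsymmetric
upper = -0.9 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")
b = np.ones(n)

iters = {"count": 0}
def count_iterations(_):
    iters["count"] += 1

x, info = spla.gmres(A, b, callback=count_iterations)
print("info =", info, "callback calls =", iters["count"])
print("residual norm =", np.linalg.norm(b - A @ x))
```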

19.
Block (including s-step) iterative methods for (non)symmetric linear systems have been studied and implemented in the past. In this article we present a (combined) block s-step Krylov iterative method for nonsymmetric linear systems. We then consider the problem of applying any block iterative method to solve a linear system with one right-hand side using many linearly independent initial residual vectors. We present a new algorithm which combines the many solutions obtained (by any block iterative method) into a single solution to the linear system. This approach of using block methods in order to increase the parallelism of Krylov methods is very useful in parallel systems. We implemented the new method on a parallel computer and we ran tests to validate the accuracy and the performance of the proposed methods. It is expected that the block s-step methods' performance will scale well on other parallel systems because of their efficient use of memory hierarchies and their reduction of the number of global communication operations over the standard methods. Copyright © 2009 John Wiley & Sons, Ltd.

20.
Large-scale generalized Sylvester equations appear in several important applications. Although the involved operator is linear, solving them requires specialized techniques. Different numerical methods have been designed to solve them, including direct factorization methods suitable for small size problems, and Krylov-type iterative methods for large-scale problems. For these iterative schemes, preconditioning is always a difficult task that deserves to be addressed. We present and analyze an implicit preconditioning strategy specially designed for solving generalized Sylvester equations that uses a preconditioned residual direction at every iteration. The advantage is that the preconditioned direction is built implicitly, avoiding the explicit knowledge of the given matrices. Only the effect of the matrix-vector product with the given matrices is required. We present encouraging numerical experiments for a set of different problems coming from several applications.
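To make the "only matrix-vector products are required" point concrete, here is a small matrix-free setup for one common form of the generalized Sylvester equation, AXB + CXD = E, using a SciPy LinearOperator so a Krylov solver never forms the Kronecker-product matrix. The paper's implicit preconditioning strategy is not implemented; this is only an unpreconditioned baseline on illustrative data:

```python
import numpy as np
import scipy.sparse.linalg as spla

# Generalized Sylvester equation  A X B + C X D = E  (one common form).
# The Krylov solver only needs the action X -> A X B + C X D, never the
# (n*m) x (n*m) Kronecker-product matrix.
rng = np.random.default_rng(1)
n, m = 60, 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(m)
C = 0.1 * rng.standard_normal((n, n))
D = np.eye(m) + 0.1 * rng.standard_normal((m, m))
X_true = rng.standard_normal((n, m))
E = A @ X_true @ B + C @ X_true @ D

def sylvester_matvec(v):
    X = v.reshape(n, m)
    return (A @ X @ B + C @ X @ D).ravel()

L = spla.LinearOperator((n * m, n * m), matvec=sylvester_matvec)
x_vec, info = spla.gmres(L, E.ravel())
X = x_vec.reshape(n, m)
print("info =", info, "relative error =",
      np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```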
