20 similar records found.
1.
Clarkson's algorithm is a three-stage randomized algorithm for solving linear programs. The algorithm has been simplified and adapted to fit the framework of LP-type problems, in which a number of non-linear problems can be tackled, such as computing the smallest enclosing ball of a set of points in R^d. In 2006 it was shown that the algorithm in its original form also works for violator spaces, a proper generalization of LP-type problems. It was not clear, however, whether previous simplifications of the algorithm carry over to the new setting. In this paper we show the following theoretical results: (a) it is shown, for the first time, that Clarkson's second stage can be simplified; (b) the previous simplifications of Clarkson's first stage carry over to the violator space setting; (c) violator spaces are equivalent to partitions of the hypercube by hypercubes.
2.
Yinyu Ye 《Mathematical Programming》1992,57(1-3):325-335
It has been shown [8] that numerous interior-point algorithms for linear programming (LP) generate solution sequences that converge to strictly complementary solutions, or interior solutions on the optimal face. In this note we further establish a theoretical basis for Gay's test (Gay, 1989) to identify the optimal face, and we develop a new termination procedure to obtain an exact solution on the optimal face. We also report numerical results for a set of LP test problems, each of which has a highly degenerate and unbounded optimal face. Research supported in part by NSF Grant DDM-8922636, the Iowa Business School Summer Grant, and the Interdisciplinary Research Grant of the University of Iowa Center for Advanced Studies.
3.
A parallel successive overrelaxation (SOR) method is proposed for the solution of the fundamental symmetric linear complementarity problem. Convergence is established under a relaxation factor which approaches the classical value of 2 for a loosely coupled problem. The parallel SOR approach is then applied to solve the symmetric linear complementarity problem associated with the least-norm solution of a linear program. This work was sponsored by the United States Army under Contract No. DAAG29-80-C-0041. This material is based on research sponsored by National Science Foundation Grant DCR-84-20963 and Air Force Office of Scientific Research Grants AFOSR-ISSA-85-00080 and AFOSR-86-0172. On leave from CRAI, Rende, Cosenza, Italy.
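For orientation, here is a minimal serial sketch of the projected SOR iteration that this parallel approach builds on; the function name, test data, and choice of relaxation factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def projected_sor(M, q, omega=1.5, tol=1e-8, max_sweeps=10_000):
    """Serial projected SOR sketch for the symmetric LCP:
    find z >= 0 with Mz + q >= 0 and z^T (Mz + q) = 0.
    Assumes M is symmetric with positive diagonal entries."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(max_sweeps):
        z_old = z.copy()
        for i in range(n):
            r_i = M[i] @ z + q[i]                      # i-th residual using latest values
            z[i] = max(0.0, z[i] - omega * r_i / M[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

# Tiny illustrative LCP: the solution satisfies z >= 0, w = Mz + q >= 0, z_i * w_i = 0.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
z = projected_sor(M, q)
print(z, M @ z + q)
```

The parallel variants discussed here and in later entries split the variables across processors and study how large the relaxation factor can remain under that decomposition.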
4.
Parallel gradient projection successive overrelaxation for symmetric linear complementarity problems and linear programs
A gradient projection successive overrelaxation (GP-SOR) algorithm is proposed for the solution of symmetric linear complementarity problems and linear programs. A key distinguishing feature of this algorithm is that, when appropriately parallelized, the relaxation factor interval (0, 2) is not reduced. In a previously proposed parallel SOR scheme, the substantially reduced relaxation interval mandated by the coupling terms of the problem often led to slow convergence. The proposed parallel algorithm solves a general linear program by finding its least 2-norm solution. Efficiency of the algorithm is in the 50 to 100 percent range, as demonstrated by computational results on the CRYSTAL token-ring multicomputer and the Sequent Balance 21000 multiprocessor. This material is based on research supported by National Science Foundation Grants DCR-8420963 and DCR-8521228 and Air Force Office of Scientific Research Grants AFOSR-86-0172 and AFOSR-86-0255.
5.
In this paper we propose an O(n^3 L) algorithm which is a modification of the path-following algorithm [8] for a linear complementarity problem. The path-following algorithm has to take a short step size in each iteration in order to bound the number of overall arithmetic operations by O(n^3 L). In practical computation, we can determine the step size adaptively. Mizuno, Yoshise, and Kikuchi [11] reported that such an adaptive algorithm required about O(L) iterations for some test problems. Here we show that we can use a rank-one update technique in the adaptive algorithm so that the number of overall arithmetic operations is theoretically bounded by O(n^3 L). Research supported in part by the U.S. Army Research Office through the Mathematical Sciences Institute of Cornell University. Research supported in part by NSF Grants ECS-8602534 and DMS-8904406 and ONR Contract N-00014-87-K0212.
6.
Renato D. C. Monteiro Stephen J. Wright 《Computational Optimization and Applications》1994,3(2):131-155
Most asymptotic convergence analyses of interior-point algorithms for monotone linear complementarity problems assume that the problem is nondegenerate, that is, that the solution set contains a strictly complementary solution. We investigate the behavior of these algorithms when this assumption is removed. The work of the first author was based on research supported by the National Science Foundation under Grant DDM-9109404 and the Office of Naval Research under Grant N00014-93-1-0234. The work of the second author was based on research supported by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.
7.
Craig A. Tovey 《Mathematical Programming》1986,35(2):193-224
We present a general abstract model of local improvement, applicable to such diverse cases as principal pivoting methods for the linear complementarity problem and hill climbing in artificial intelligence. The model accurately predicts the behavior of the algorithms and allows for a variety of probabilistic assumptions that permit degeneracy. Simulation indicates an approximately linear average number of iterations under a variety of probability assumptions. We derive theoretical bounds of 2en log n and en^2 for different distributions, respectively, as well as polynomial bounds for a broad class of probability distributions. We conclude with a discussion of the applications of the model to LCP and linear programming. The author was supported by the New Faculty Research Development Program of the Georgia Institute of Technology. This work is based on the author's Ph.D. thesis, performed under George Dantzig at Stanford, 1978–81, at the Systems Optimization Laboratory. While at Stanford, research was supported in part by Department of Energy Contract AM03-76SF00326, PA #DE-AT03-76ER72018; Office of Naval Research Contract N00014-75-C-0267; National Science Foundation Grants MCS76-81259, MCS-7926009 and ECS-8012974; and Army Research Office Contract DAA29-79-C-0110. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.
8.
In this paper we describe a computational study of block principal pivoting (BP) and interior-point predictor-corrector (PC) algorithms for the solution of large-scale linear complementarity problems (LCPs) with symmetric positive definite matrices. This study shows that these algorithms are in general quite appropriate for this type of LCP. The BP algorithm does not seem to be sensitive to bad scaling or to degeneracy of the unique solution of the LCP, while these aspects have some effect on the performance of the PC algorithm. On the other hand, the BP method did not perform well on two LCPs with ill-conditioned matrices for which the PC algorithm behaved quite well. A hybrid algorithm combining these two techniques is also introduced and seems to be the most robust procedure for the solution of large-scale LCPs with symmetric positive definite matrices. Support of this work has been provided by the Instituto de Telecomunicações.
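As context for the BP method studied here, the sketch below shows a bare-bones block principal pivoting loop for an LCP with a symmetric positive definite matrix; it omits the safeguard (fallback to single pivots) used in robust implementations, and the names and the small test problem are illustrative assumptions.

```python
import numpy as np

def block_principal_pivoting(M, q, max_iter=100):
    """Sketch of block principal pivoting for LCP(M, q) with M symmetric
    positive definite: find z >= 0, w = Mz + q >= 0, z^T w = 0."""
    n = len(q)
    F = np.zeros(n, dtype=bool)                 # indices where z may be positive (w = 0)
    for _ in range(max_iter):
        z, w = np.zeros(n), q.copy()
        if F.any():
            z[F] = np.linalg.solve(M[np.ix_(F, F)], -q[F])
            w = M[:, F] @ z[F] + q
        infeasible = (F & (z < 0)) | (~F & (w < 0))
        if not infeasible.any():
            return z, w
        F = F ^ infeasible                      # block pivot: flip every infeasible index
    raise RuntimeError("no convergence within max_iter (safeguard omitted)")

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
print(block_principal_pivoting(M, q))
```

Each iteration costs one positive definite solve on the current index set, which is one reason the method suits large LCPs of this type.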
9.
Aniekan Ebiefung 《Mathematical Programming》1995,69(1-3):255-268
We show that the Cottle-Dantzig generalized linear complementarity problem (GLCP) is equivalent to a nonlinear complementarity problem (NLCP), a piecewise-linear system of equations (PLS), a multiple-objective programming problem (MOP), and a variational inequality problem (VIP). On the basis of these equivalences, we provide an algorithm for solving the GLCP. Project partially supported by a grant from Oak Ridge Associated Universities, TN, USA.
10.
Tommaso Schiavinotto Thomas Stützle 《Journal of Mathematical Modelling and Algorithms》2004,3(4):367-402
The linear ordering problem is an NP-hard problem that arises in a variety of applications. Because of its practical relevance, it has received considerable attention, and a variety of algorithmic approaches to its solution have been proposed. In this paper we give a detailed search space analysis of the available benchmark instance classes that have been used in previous research. The large fitness-distance correlations observed for many of these instances suggest that adaptive restart algorithms such as iterated local search or memetic algorithms, which iteratively generate new starting solutions for a local search based on previous search experience, are promising candidates for obtaining high-performing algorithms. We therefore experimentally compared two such algorithms, and the final experimental results suggest that, in particular, the memetic algorithm is a new state-of-the-art approach to the linear ordering problem.
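To make the adaptive restart idea concrete, here is a compact iterated local search sketch for the linear ordering problem; it is not the authors' implementation, and the toy instance, move set, and perturbation strength are illustrative assumptions.

```python
import random

def lop_value(C, perm):
    """Linear ordering objective: sum of C[a][b] over pairs where a precedes b."""
    return sum(C[perm[i]][perm[j]]
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

def local_search(C, perm):
    """Local search over insertion moves (remove one element, reinsert elsewhere)."""
    improved = True
    while improved:
        improved = False
        best = lop_value(C, perm)
        for i in range(len(perm)):
            for j in range(len(perm)):
                if i == j:
                    continue
                cand = perm[:i] + perm[i + 1:]
                cand.insert(j, perm[i])
                val = lop_value(C, cand)
                if val > best:
                    perm, best, improved = cand, val, True
    return perm

def iterated_local_search(C, iters=50, kick=3, seed=0):
    """ILS: perturb the incumbent with a few random swaps, re-optimize, keep the better one."""
    rng = random.Random(seed)
    n = len(C)
    best = local_search(C, list(range(n)))
    for _ in range(iters):
        cand = best[:]
        for _ in range(kick):                         # perturbation: random transpositions
            a, b = rng.randrange(n), rng.randrange(n)
            cand[a], cand[b] = cand[b], cand[a]
        cand = local_search(C, cand)
        if lop_value(C, cand) > lop_value(C, best):
            best = cand
    return best, lop_value(C, best)

C = [[0, 5, 1], [2, 0, 4], [3, 1, 0]]
print(iterated_local_search(C))                       # ([0, 1, 2], 10) for this toy instance
```

A memetic algorithm replaces the single incumbent with a small population and recombines orderings before applying the same local search.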
11.
The purpose of this paper is to present some results on linear programming in measure spaces (LPM). We prove that, under certain conditions, the optimal value of an LPM is equal to the optimal value of the dual problem (DLPM). We also present two algorithms for solving various LPM problems and prove the convergence properties of these algorithms.
12.
In this paper, we introduce a variant of a cutting plane algorithm and show that this algorithm reduces to the well-known Dinkelbach-type procedure of Crouzeix, Ferland, and Schaible if the optimization problem is a generalized fractional program. This observation yields an easy geometrical interpretation of one of the most important algorithms in generalized fractional programming. Moreover, it is shown that the convergence of the Dinkelbach-type procedure is a direct consequence of the properties of this cutting plane method. Finally, a class of generalized fractional programs is considered for which the standard positivity assumption on the denominators of the ratios of the objective function has to be imposed explicitly. It is also shown that, when using a Dinkelbach-type approach for this class of programs, the constraints ensuring the positivity of the denominators can be dropped. The authors would like to thank the anonymous referees and Frank Plastria for their constructive remarks on an earlier version of this paper. This research was carried out at Erasmus University, Rotterdam, The Netherlands, and was supported by JNICT, Lisboa, Portugal, under Contract BD/707/90-RM.
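For readers unfamiliar with the procedure, the following sketch shows the classical single-ratio Dinkelbach iteration over a finite candidate set; the generalized fractional programs treated in the paper replace the single ratio by a maximum of several ratios, and the candidate set and function names here are illustrative assumptions.

```python
def dinkelbach(points, f, g, tol=1e-10, max_iter=100):
    """Classical Dinkelbach sketch: maximize f(x)/g(x) over a finite set,
    assuming g(x) > 0 on that set."""
    x = points[0]
    lam = f(x) / g(x)
    for _ in range(max_iter):
        # Parametric subproblem: maximize f(x) - lam * g(x).
        x = max(points, key=lambda p: f(p) - lam * g(p))
        if abs(f(x) - lam * g(x)) < tol:      # F(lam) = 0  =>  lam is the optimal ratio
            return x, lam
        lam = f(x) / g(x)
    return x, lam

pts = [(1, 4), (2, 3), (3, 5)]
print(dinkelbach(pts, f=lambda p: p[0], g=lambda p: p[1]))   # ((2, 3), 2/3)
```

Each iterate x supplies an affine function of the parameter, lam -> f(x) - lam*g(x); the next lam is its root, which is the geometric picture that connects the procedure to cutting planes.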
13.
K. T. Medhi 《Journal of Optimization Theory and Applications》1991,69(2):285-296
We propose a parallel implementation of the classical Lemke algorithm for solving the linear complementarity problem. The algorithm is designed for a loosely coupled network of computers, which is characterized by relatively high communication costs. We provide an accurate prediction of speedup based on a simple operation count. The algorithm produces speedup near p, where p is the number of processors, when tested on large problems, as demonstrated by computational results on the CRYSTAL token-ring multicomputer and the Sequent Balance 21000 multiprocessor. This material is based on research supported by National Science Foundation Grants DCR-8420963 and DCR-8521228 and by Air Force Office of Scientific Research Grants AFOSR-86-0172 and AFOSR-86-0255 while the author was at the University of Wisconsin, Madison, Wisconsin.
14.
O. Güler 《Journal of Optimization Theory and Applications》1992,75(3):445-470
We introduce new augmented Lagrangian algorithms for linear programming which provide faster global convergence rates than the augmented Lagrangian algorithm of Polyak and Tret'yakov. Our algorithm shares the same properties as the Polyak-Tret'yakov algorithm in that it terminates in finitely many iterations and obtains both primal and dual optimal solutions. We present an implementable version of the algorithm which requires only approximate minimization at each iteration. We provide a global convergence rate for this version of the algorithm and show that the primal and dual points generated by the algorithm converge to the primal and dual optimal sets, respectively.
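As background, the sketch below is a generic method-of-multipliers loop for a linear program with inequality constraints; it is not the Polyak-Tret'yakov algorithm or the faster variants proposed here, and the penalty parameter, inner solver, and test problem are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_lp(c, A, b, mu=10.0, outer_iters=50, tol=1e-6):
    """Method-of-multipliers sketch for  min c^T x  subject to  A x <= b."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer_iters):
        def L(v):                                   # augmented Lagrangian in v for fixed y
            s = y + mu * (A @ v - b)
            return c @ v + (np.maximum(0.0, s) ** 2 - y ** 2).sum() / (2.0 * mu)
        x_new = minimize(L, x, method="BFGS").x     # approximate inner minimization
        y = np.maximum(0.0, y + mu * (A @ x_new - b))   # multiplier (dual estimate) update
        if np.linalg.norm(x_new - x) < tol:
            return x_new, y
        x = x_new
    return x, y

# Toy LP:  min -x1 - 2*x2  s.t.  x1 + x2 <= 1,  x1 >= 0,  x2 >= 0  (optimum near (0, 1)).
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
print(augmented_lagrangian_lp(c, A, b))
```

The point of the exact results cited in this entry is that, with suitable parameters, such a scheme terminates finitely for linear programs rather than only in the limit.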
15.
We propose a two-stage successive overrelaxation (TSOR) algorithm for solving the symmetric linear complementarity problem. After the first SOR preprocessing stage, the algorithm concentrates on updating a certain prescribed subset of variables, which is determined by exploiting the complementarity property. We demonstrate that this algorithm successfully solves problems with up to ten thousand variables. This material is based on research supported by National Science Foundation Grants DCR-8420963 and DCR-8521228 and Air Force Office of Scientific Research Grants AFOSR-86-0172 and AFOSR-86-0255 while the author was at the Computer Sciences Department at the University of Wisconsin-Madison, USA.
16.
Given an integer polyhedron P_I, an integer point in P_I, and a point x*, the primal separation problem is the problem of finding a linear inequality which is valid for P_I, violated by x*, and satisfied at equality by the given integer point. The primal separation problem plays a key role in the primal approach to integer programming. In this paper we examine the complexity of primal separation for several well-known classes of inequalities for various important combinatorial optimization problems, including the knapsack, stable set and travelling salesman problems.
17.
In this paper, we present an extension to the NE/SQP method; the latter is a robust algorithm that we proposed for solving the nonlinear complementarity problem in an earlier article. In this extended version of NE/SQP, instead of solving the quadratic program subproblems exactly, approximate solutions are generated via an inexact rule. Under a proper choice of this rule, the inexact method is shown to inherit the same convergence properties as the original NE/SQP method. In addition to developing the convergence theory for the inexact method, we also present numerical results of the algorithm tested on two problems of varying size.
18.
Given a real (finite-dimensional or infinite-dimensional) Hilbert space H with a Jordan product, we consider the Lorentz cone linear complementarity problem, denoted by LCP(T, Ω, q), where T is a continuous linear operator on H, Ω ⊆ H is a Lorentz cone, and q ∈ H. We investigate conditions under which the problem has a unique solution for all q ∈ H (i.e., T has the GUS-property). Several sufficient conditions and several necessary conditions are given. In particular, we provide two sufficient and necessary cond...
19.
We prove that RANDOM EDGE, the simplex algorithm that always chooses a random improving edge to proceed on, can take a mildly exponential number of steps in the model of abstract objective functions (introduced by Williamson Hoke [Completely unimodal numberings of a simple polytope, Discrete Appl. Math. 20 (1988) 69-81] and by Kalai [A simple way to tell a simple polytope from its graph, J. Combin. Theory Ser. A 49(2) (1988) 381-383] under different names). We define an abstract objective function on the n-dimensional cube for which the algorithm, started at a random vertex, needs at least exp(const·n^{1/3}) steps with high probability. The best previous lower bound was quadratic. So in order for RANDOM EDGE to succeed in polynomial time, geometry must help.
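To fix ideas, a small sketch of the RANDOM EDGE walk on the n-cube is given below for a simple linear objective; the intricate abstract objective functions that force exponentially many steps in the paper are not reproduced here, and all names and parameters are illustrative assumptions.

```python
import random

def random_edge(n, objective, seed=0):
    """RANDOM EDGE on the n-cube: repeatedly move to a uniformly random
    neighbour (one coordinate flipped) with strictly smaller objective,
    until no improving neighbour exists. Returns final vertex and step count."""
    rng = random.Random(seed)
    v = tuple(rng.randint(0, 1) for _ in range(n))     # random starting vertex
    steps = 0
    while True:
        neighbours = [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)]
        improving = [u for u in neighbours if objective(u) < objective(v)]
        if not improving:
            return v, steps
        v = rng.choice(improving)
        steps += 1

# Linear objective with distinct vertex values (hence completely unimodal):
# the walk ends at the all-zero vertex in at most n improving steps.
n = 12
weights = [2 ** i for i in range(n)]
print(random_edge(n, lambda v: sum(w * x for w, x in zip(weights, v))))
```

On a linear objective like this one the walk is fast; the point of the lower bound is that a suitably chosen abstract objective function defeats this behaviour.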
20.
Clovis C. Gonzaga 《Mathematical Programming》1989,43(1-3):151-173
The linear programming problem is restated as a non-linear programming problem in which Karmarkar's logarithmic potential function is minimized over the positive cone generated by the original feasible set. The resulting problem is then solved by a master algorithm that iteratively rescales the problem and calls an internal unconstrained non-linear programming algorithm. Several different procedures for the internal algorithm are proposed, giving priority either to the reduction of the potential function or of the actual cost. We show that Karmarkar's algorithm is equivalent to this method in the special case in which the internal algorithm is reduced to a single steepest-descent iteration. All variants of the new algorithm have the same complexity as Karmarkar's method, but the amount of computation is reduced by the fact that only one projection matrix must be calculated for each call of the internal algorithm. Research partly sponsored by CNPq, the Brazilian National Council for Scientific and Technological Development, by National Science Foundation Grant ECS-857362, Office of Naval Research Contract N00014-86-K-0295, and AFOSR Grant 86-0116. On leave from COPPE, Federal University of Rio de Janeiro, Cx. Postal 68511, 21941 Rio de Janeiro, RJ, Brasil.
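For reference, the sketch below merely evaluates a Karmarkar-style logarithmic potential and its gradient; the exact weight on the logarithmic cost term, the rescaling, and the projection used in each internal step depend on the particular variant and are not reproduced here, so treat the default choice q = n as an illustrative assumption.

```python
import numpy as np

def karmarkar_potential(c, x, q=None):
    """Logarithmic potential  phi(x) = q*ln(c^T x) - sum_j ln(x_j),
    defined for x > 0 with c^T x > 0."""
    x = np.asarray(x, dtype=float)
    q = len(x) if q is None else q
    return q * np.log(c @ x) - np.log(x).sum()

def potential_gradient(c, x, q=None):
    """Gradient of the potential:  q*c/(c^T x) - 1/x  (elementwise)."""
    x = np.asarray(x, dtype=float)
    q = len(x) if q is None else q
    return q * c / (c @ x) - 1.0 / x

c = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 1.0, 1.0])
print(karmarkar_potential(c, x), potential_gradient(c, x))
```

In the master/internal scheme described above, the internal algorithm would take descent steps on this potential (or on the cost) in the rescaled space, reusing a single projection matrix per call.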