Found 20 similar documents (search time: 14 ms)
1.
This paper presents a "standard form" variant of Karmarkar's algorithm for linear programming. The techniques of using duality and cutting the objective are combined in this variant to maintain polynomial-time complexity and to bypass the difficulties found in Karmarkar's original algorithm. The variant works with problems in standard form and simultaneously generates sequences of primal and dual feasible solutions whose objective function values converge to the unknown optimal value. Some computational results are also reported.
2.
We present a primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach. This approach generates a sequence of problems, each of which is approximately solved by taking a single Newton step. It is shown that the method requires O(√n L) iterations and O(n^3.5 L) arithmetic operations. By using modified Newton steps, the number of arithmetic operations required by the algorithm can be reduced to O(n^3 L).
This research was supported in part by NSF Grant DMS-85-12277 and ONR Contract N-00014-87-K0214. It was presented at the Meeting on Mathematische Optimierung, Mathematisches Forschungsinstitut, Oberwolfach, West Germany, January 3–9, 1988.
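For illustration, here is a minimal sketch of one damped Newton step on a logarithmic barrier subproblem for a convex QP. The inequality form min ½xᵀQx + cᵀx s.t. Ax ≤ b, the function names, and the step-halving rule below are assumptions for the sketch, not the paper's exact algorithm or analysis.

```python
import numpy as np

def newton_step_log_barrier(Q, c, A, b, x, mu):
    """One damped Newton step on f_mu(x) = 0.5 x'Qx + c'x - mu * sum(log(b - Ax)).

    Assumes x is strictly feasible (Ax < b) and Q is positive semidefinite.
    Generic sketch, not the specific update analyzed in the paper.
    """
    s = b - A @ x                                     # slacks, must stay positive
    grad = Q @ x + c + mu * A.T @ (1.0 / s)
    hess = Q + mu * A.T @ np.diag(1.0 / s**2) @ A
    dx = np.linalg.solve(hess, -grad)                 # Newton direction

    # Damped step: back off until the new point keeps all slacks positive.
    t = 1.0
    while np.any(b - A @ (x + t * dx) <= 0):
        t *= 0.5
    return x + t * dx

# Tiny usage example: minimize x1^2 + x2^2 subject to x1 + x2 <= 1, x1 >= 0, x2 >= 0.
Q = 2 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = np.array([0.25, 0.25])
for mu in [1.0, 0.1, 0.01]:
    x = newton_step_log_barrier(Q, c, A, b, x, mu)
print(x)  # moves toward the minimizer (0, 0) as mu decreases
```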
3.
Convex quadratically constrained quadratic problems are considered. It is shown that such problems can be transformed to a conic form. The feasible set of the conic form is the intersection of two sets: a direct product of standard quadratic cones intersected with a hyperplane (the analogue of a simplex), and a linear subspace. For a problem of such form, the analogues of Karmarkar's projective transformation and logarithmic barrier function are built. This allows us to extend Karmarkar's method for LP "word by word" to QCQP and to obtain a similar polynomial worst-case bound on the number of iterations.
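As a generic illustration of the kind of conic reformulation involved (assumed notation; the paper's exact construction via Karmarkar's canonical form differs), a single convex quadratic constraint can be written as membership in a standard quadratic (second-order) cone:

```latex
% Standard second-order-cone representation of a convex quadratic constraint
% (a generic illustration; the paper's exact conic reformulation may differ).
% With Q \succeq 0 factored as Q = L L^{\top}, the constraint
%   x^{\top} Q x + q^{\top} x + r \le 0
% is equivalent to
\[
\left\| \begin{pmatrix} 2L^{\top}x \\ 1 + q^{\top}x + r \end{pmatrix} \right\|_2
\;\le\; 1 - q^{\top}x - r ,
\]
% i.e. the point (2L^{\top}x,\; 1 + q^{\top}x + r,\; 1 - q^{\top}x - r) lies in a
% standard quadratic (second-order) cone, the building block of the conic form.
```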
4.
An implementation of Karmarkar's algorithm for linear programming (cited 14 times: 0 self-citations, 14 by others)
Ilan Adler Mauricio G. C. Resende Geraldo Veiga Narendra Karmarkar 《Mathematical Programming》1989,44(1-3):297-335
This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0.
5.
J. E. Mitchell 《Journal of Optimization Theory and Applications》1993,78(1):127-142
We give two results related to Gonzaga's recent paper showing that lower bounds derived from the Todd-Burrell update can be obtained by solving a one-variable linear programming problem involving the centering direction and the affine direction. We show how these results may be used to update the primal solution when using the dual affine variant of Karmarkar's algorithm. This leads to a dual projective algorithm.
This research was partially supported by ONR Grant Number N00014-90-J-1714.
6.
David M. Gay 《Mathematical Programming》1987,37(1):81-90
This paper presents a variant of Karmarkar's linear programming algorithm that works directly with problems expressed in standard form and requires no a priori knowledge of the optimal objective function value. Rather, it uses a variation on Todd and Burrell's approach to compute ever better bounds on the optimal value, and it can be run as a primal-dual algorithm that produces sequences of primal and dual feasible solutions whose objective function values converge to this value. The only restrictive assumption is that the feasible region is bounded with a nonempty interior; compactness of the feasible region can be relaxed to compactness of the (nonempty) set of optimal solutions.
7.
Michael J. Todd 《Mathematical Programming》1988,41(1-3):97-113
We propose methods to take advantage of specially-structured constraints in a variant of Karmarkar's projective algorithm for standard form linear programming problems. We can use these constraints to generate improved bounds on the optimal value of the problem and also to compute the necessary projections more efficiently, while maintaining the theoretical bound on the algorithm's performance. It is shown how various upper-bounding constraints can be handled implicitly in this way. Unfortunately, the situation for network constraints appears less favorable.
Research supported in part by National Science Foundation Grant ECS-8602534, ONR Contract N00014-87-K-0212 and the US Army Research Office through the Mathematical Sciences Institute of Cornell University.
8.
B. Kalantari 《Mathematical Programming》1990,46(1-3):73-78
We define a function of the interior of the feasible region of Karmarkar's canonical linear program with the property that its supremum is bounded above by one, if and only if the optimal value of the linear program is zero. We use this function to obtain data-dependent steps for Karmarkar's algorithm. The corresponding reduction in the potential function improves upon the data-independent reduction derived by Padberg. We analyze the general step and consider its potential practical advantages.
9.
Yinyu Ye 《Mathematical Programming》1990,46(1-3):61-72
We propose a build-down scheme for Karmarkar's algorithm and the simplex method for linear programming. The scheme starts with an optimal basis candidate set including all columns of the constraint matrix, then constructs a dual ellipsoid containing all optimal dual solutions. A pricing rule is developed for checking whether or not a dual hyperplane corresponding to a column intersects the containing ellipsoid. If the dual hyperplane has no intersection with the ellipsoid, its corresponding column will not appear in any of the optimal bases, and can be eliminated from the candidate set. As these methods iterate, the candidate set is eventually built down to a set that contains only the optimal basic columns.
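To illustrate the pricing idea (a generic sketch; the ellipsoid in the paper is constructed from the algorithm's iterates, and the names below are assumptions): a dual hyperplane {y : a_jᵀy = c_j} misses an ellipsoid {y : (y − ŷ)ᵀB⁻¹(y − ŷ) ≤ 1} exactly when |a_jᵀŷ − c_j| exceeds the support value √(a_jᵀB a_j), in which case column j can be dropped.

```python
import numpy as np

def can_eliminate_column(a_j, c_j, y_center, B):
    """Return True if the hyperplane {y : a_j'y = c_j} does not intersect the
    ellipsoid {y : (y - y_center)' B^{-1} (y - y_center) <= 1}.

    Generic sketch of the build-down pricing test; the paper's ellipsoid
    containing all optimal dual solutions is constructed differently.
    """
    # Range of a_j'y over the ellipsoid is a_j'y_center +/- sqrt(a_j' B a_j).
    support = np.sqrt(a_j @ B @ a_j)
    return abs(a_j @ y_center - c_j) > support

# Usage: unit ball centered at the origin, two candidate columns.
B = np.eye(2)
y_center = np.zeros(2)
print(can_eliminate_column(np.array([1.0, 0.0]), 2.0, y_center, B))  # True: plane x=2 misses the ball
print(can_eliminate_column(np.array([1.0, 0.0]), 0.5, y_center, B))  # False: plane x=0.5 cuts the ball
```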
10.
M. M. Kostreva 《Journal of Optimization Theory and Applications》1989,62(1):63-76
Murty's algorithm for the linear complementarity problem is generalized to solve the optimality conditions for linear and convex quadratic programming problems with both equality and inequality constraints. An implementation is suggested which provides both efficiency and tight error control. Numerical experiments as well as field tests in various applications show favorable results.
The author thanks K. G. Murty for his encouragement and helpful comments.
11.
M. J. D. Powell 《Mathematical Programming》1993,62(1-3):153-197
Karmarkar's algorithm for linear programming was published in 1984, and it is highly important to both theory and practice. On the practical side some of its variants have been found to be far more efficient than the simplex method on a wide range of very large calculations, while its polynomial time properties are fundamental to research on complexity. These properties depend on the fact that each iteration reduces a potential function by an amount that is bounded away from zero, the bound being independent of all the coefficients that occur. It follows that, under mild conditions on the initial vector of variables, the number of iterations that are needed to achieve a prescribed accuracy in the final value of the linear objective function is at most a multiple of n, where n is the number of inequality constraints. By considering a simple example that allows n to be arbitrarily large, we deduce analytically that the magnitude of this complexity bound is correct. Specifically, we prove that the solution of the example by Karmarkar's original algorithm can require about n/20 iterations. Further, we find that the algorithm makes changes to the variables that are closely related to the steps of the simplex method.
This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday.
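A compact worked version of the counting argument sketched above, under assumed standard notation rather than the paper's own derivation:

```latex
% Sketch of the standard potential-reduction argument (assumed notation,
% not the paper's own derivation).  Karmarkar's potential for min c^T x is
\[
  f(x) \;=\; n \ln\!\big(c^{\top}x\big) \;-\; \sum_{j=1}^{n} \ln x_j .
\]
% If every iteration guarantees f(x^{k+1}) \le f(x^{k}) - \delta with a fixed
% \delta > 0 independent of the data, then after k iterations
\[
  n \ln\frac{c^{\top}x^{k}}{c^{\top}x^{0}}
  \;=\; f(x^{k}) - f(x^{0}) + \sum_{j} \ln\frac{x^{k}_j}{x^{0}_j}
  \;\le\; -k\delta + O(nL),
\]
% using the potential decrease and the fact that the components of the feasible
% iterates are bounded (so the log terms contribute at most O(nL)).  Hence
% c^T x^k \le 2^{-L} c^T x^0 as soon as k is a suitable multiple of nL/\delta,
% i.e. the iteration count is at most a multiple of n for fixed accuracy L.
```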
12.
Miroslav D. Asic Vera V. Kovacevic-Vujcic Mirjana D. Radosavljevic-Nikolic 《Mathematical Programming》1990,46(1-3):173-190
The asymptotic behaviour of Karmarkar's method is studied and an estimate of the rate of the objective function value decrease is given. Two possible sources of numerical instability are discussed and a stabilizing procedure is proposed.
Research supported in part by Republicka zajednica za nauku SR Srbije.
13.
We describe a new potential function and a sequence of ellipsoids in the path-following algorithm for convex quadratic programming. Each ellipsoid in the sequence contains all of the optimal primal and dual slack vectors. Furthermore, the volumes of the ellipsoids shrink at the ratio 2^{-Ω(√n)}, in comparison to 2^{-Ω(1)} in Karmarkar's algorithm and 2^{-Ω(1/n)} in the ellipsoid method. We also show how to use these ellipsoids to identify the optimal basis in the course of the algorithm for linear programming.
Research supported by The U.S. Army Research Office through The Mathematical Sciences Institute of Cornell University when the author was visiting at Cornell. Research supported in part by National Science Foundation Grant ECS-8602534 and Office of Naval Research Contract N00014-87-K-0212.
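A back-of-the-envelope version of why a per-iteration volume ratio of this order arises in path-following methods (a heuristic sketch under assumed scalings, not the paper's analysis):

```latex
% Heuristic volume-ratio arithmetic (assumed scalings, not the paper's proof).
% A path-following step typically shrinks the duality-gap parameter by
\[
  \frac{\mu^{k+1}}{\mu^{k}} \;=\; 1 - \frac{c}{\sqrt{n}} \qquad (c > 0),
\]
% and an ellipsoid whose radii scale with the gap has volume proportional to \mu^{n}, so
\[
  \frac{\mathrm{vol}(E^{k+1})}{\mathrm{vol}(E^{k})}
  \;\approx\; \Big(1 - \frac{c}{\sqrt{n}}\Big)^{\!n}
  \;\le\; e^{-c\sqrt{n}}
  \;=\; 2^{-\Omega(\sqrt{n})},
\]
% compared with a constant-factor (2^{-\Omega(1)}) volume decrease per step in
% Karmarkar's algorithm and a 2^{-\Omega(1/n)} decrease per step of the ellipsoid method.
```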
14.
Jean Bosco Etoa Etoa 《Applied mathematics and computation》2011,217(15):6680-6690
In this paper, we present a smoothing sequential quadratic programming method to compute a solution of a convex quadratic bilevel programming problem. We use the Karush-Kuhn-Tucker optimality conditions of the lower-level problem to obtain a nonsmooth optimization problem, known as a mathematical program with equilibrium constraints; the complementarity conditions of the lower-level problem are then appended to the upper-level objective function with a classical penalty. These complementarity conditions are not relaxed from the constraints; instead, they are reformulated as a system of smooth equations by means of semismooth equations using the Fischer-Burmeister function. Then, using a sequential quadratic programming method, we solve a series of smooth, regular problems that progressively approximate the nonsmooth problem. Some preliminary computational results are reported, showing that our approach is efficient.
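For concreteness, the Fischer-Burmeister function rewrites a complementarity pair as a single equation; a commonly used smoothed variant is shown below (the precise smoothing and perturbation parameter used in the paper may differ, and the names are illustrative):

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = a + b - sqrt(a^2 + b^2); phi = 0 iff a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt(a**2 + b**2)

def smoothed_fischer_burmeister(a, b, mu):
    """A commonly used smoothing (illustrative; the paper's exact smoothing may differ):
    the perturbation mu > 0 makes the function differentiable everywhere and
    recovers phi as mu -> 0."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

# The complementarity pair (a, b) = (0, 3) satisfies a >= 0, b >= 0, a*b = 0:
print(fischer_burmeister(0.0, 3.0))                  # 0.0
print(fischer_burmeister(1.0, 2.0))                  # nonzero, since 1*2 != 0
print(smoothed_fischer_burmeister(0.0, 3.0, 1e-3))   # close to 0 for small mu
```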
15.
Described here is the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.
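As an illustration of the projected conjugate gradient idea for an equality-constrained QP (a generic textbook-style sketch under assumed names and problem form, not the paper's dual-QP implementation or its sparse data structures):

```python
import numpy as np

def projected_cg(G, c, A, b, x0, tol=1e-10, max_iter=200):
    """Solve min 0.5*x'Gx + c'x subject to Ax = b by projected conjugate gradients.

    Generic sketch; assumes G is positive definite on the null space of A and
    that the starting point x0 is feasible.
    """
    assert np.allclose(A @ x0, b), "x0 must satisfy Ax0 = b"
    # Orthogonal projector onto the null space of A.
    P = np.eye(len(x0)) - A.T @ np.linalg.solve(A @ A.T, A)
    x = x0.copy()
    r = G @ x + c              # gradient of the objective
    g = P @ r                  # projected residual
    d = -g                     # search direction stays in null(A)
    for _ in range(max_iter):
        rg = r @ g
        if rg < tol:
            break
        Gd = G @ d
        alpha = rg / (d @ Gd)
        x = x + alpha * d
        r = r + alpha * Gd
        g_new = P @ r
        beta = (r @ g_new) / rg
        d = -g_new + beta * d
        g = g_new
    return x

# Usage: min x1^2 + x2^2 + x3^2 subject to x1 + x2 + x3 = 3 (solution (1, 1, 1)).
G = 2 * np.eye(3)
c = np.zeros(3)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
x0 = np.array([3.0, 0.0, 0.0])       # feasible starting point
print(projected_cg(G, c, A, b, x0))  # -> approximately [1. 1. 1.]
```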
16.
This paper establishes a simple necessary and sufficient condition for the stability of a linearly constrained convex quadratic program under perturbations of the linear part of the data, including the constraint matrix. It also establishes results on the continuity and differentiability of the optimal objective value of the program as a function of a parameter specifying the magnitude of the perturbation. The results established herein directly generalize well-known results on the stability of linear programs.
17.
Kurt M. Anstreicher 《Mathematical Programming》1989,43(1-3):209-223
We devise a projective algorithm which explicitly considers the constraint that an artificial variable be zero at the solution. Inclusion of such a constraint allows the algorithm to be applied to a (possibly infeasible) standard form linear program, without the addition of any big-M terms or conversion to a primal-dual problem.
18.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we prove rigorously the important theoretical result that, for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for convex quadratic bilevel programming problems.
19.
Karmarkar's linear programming algorithm and Newton's method (cited 1 time: 0 self-citations, 1 by others)
This paper describes a full-dimensional version of Karmarkar's linear programming algorithm, the projective scaling algorithm, which is defined for any linear program in R^n having a bounded, full-dimensional polytope of feasible solutions. If such a linear program has m inequality constraints, then it is equivalent under an injective affine mapping J: R^n → R^m to Karmarkar's original algorithm for a linear program in R^m having m − n equality constraints and m inequality constraints. Karmarkar's original algorithm minimizes a potential function g(x), and the projective scaling algorithm is equivalent to that version of Karmarkar's algorithm whose step size minimizes the potential function in the step direction.
The projective scaling algorithm is shown to be a global Newton method for minimizing a logarithmic barrier function in a suitable coordinate system. The new coordinate system is obtained from the original coordinate system by a fixed projective transformation y = Φ(x) which maps the hyperplane H_opt = {x : ⟨c, x⟩ = c_0} specified by the optimal value of the objective function to the hyperplane at infinity. The feasible solution set is mapped under Φ to an unbounded polytope. Let f_LB(y) denote the logarithmic barrier function associated to the m inequality constraints in the new coordinate system. It coincides up to an additive constant with Karmarkar's potential function in the new coordinate system. The global Newton method iterate y* for a strictly convex function f(y) defined on a suitable convex domain is the point y* that minimizes f(y) on the search ray {y + λ v_N(y) : λ ≥ 0}, where v_N(y) = −(∇²f(y))^{−1} ∇f(y) is the Newton's method vector. If {x^(k)} are a set of projective scaling algorithm iterates in the original coordinate system and y^(k) = Φ(x^(k)), then {y^(k)} are a set of global Newton method iterates for f_LB(y), and conversely.
Karmarkar's algorithm with step size chosen to minimize the potential function is known to converge at least at a linear rate. It is shown (by example) that this algorithm does not have a superlinear convergence rate.
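A minimal numeric sketch of the "global Newton" step described above (illustrative only: the test function, names, and the bounded line-search bracket are assumptions, not taken from the paper): compute the Newton direction and then minimize f along the ray from the current point in that direction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def global_newton_step(f, grad, hess, y, lam_max=10.0):
    """One global Newton iterate: minimize f on the ray {y + lam*v_N(y): lam >= 0},
    where v_N(y) = -hess(y)^{-1} grad(y).  Illustrative sketch with a bounded
    1-D search; lam_max is an assumed bracket, not part of the method's definition."""
    v = -np.linalg.solve(hess(y), grad(y))              # Newton direction
    res = minimize_scalar(lambda lam: f(y + lam * v),   # search along the ray
                          bounds=(0.0, lam_max), method='bounded')
    return y + res.x * v

# Usage on a strictly convex function: f(y) = exp(y1) + exp(-y1) + y2^2.
f = lambda y: np.exp(y[0]) + np.exp(-y[0]) + y[1]**2
grad = lambda y: np.array([np.exp(y[0]) - np.exp(-y[0]), 2 * y[1]])
hess = lambda y: np.array([[np.exp(y[0]) + np.exp(-y[0]), 0.0], [0.0, 2.0]])
y = np.array([1.5, -2.0])
for _ in range(5):
    y = global_newton_step(f, grad, hess, y)
print(y)  # converges toward the minimizer (0, 0)
```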
20.
Kurt M. Anstreicher 《Mathematical Programming》1993,62(1-3):517-535
In a recent paper, Shaw and Goldfarb show that a version of the standard form projective algorithm can achieve O(√n L) step complexity, as opposed to the O(nL) step complexity originally demonstrated for the algorithm. The analysis of Shaw and Goldfarb shows that the algorithm, using a constant, fixed steplength, approximately follows the central trajectory. In this paper we show that simple modifications of the projective algorithm obtain the same complexity improvement, while permitting a linesearch of the potential function on each step. An essential component is the addition of a single constraint, motivated by Shaw and Goldfarb's analysis, which makes the standard form algorithm strictly monotone in the true objective.
This paper was written while the author was a research fellow at the Center for Operations Research and Econometrics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium. Research supported by CORE, and the Center for Advanced Studies, University of Iowa.