20 similar references found.
1.
Received August 29, 1996 / Revised version received May 1, 1998 / Published online October 21, 1998
2.
3.
Sensitivity analysis in linear programming and semidefinite programming using interior-point methods
We analyze perturbations of the right-hand side and the cost parameters in linear programming (LP) and semidefinite programming
(SDP). We obtain tight bounds on the perturbations that allow interior-point methods to recover feasible and near-optimal
solutions in a single interior-point iteration. For the unique, nondegenerate solution case in LP, we show that the bounds
obtained using interior-point methods compare nicely with the bounds arising from using the optimal basis. We also present
explicit bounds for SDP using the Monteiro-Zhang family of search directions and specialize them to the AHO, H..K..M, and
NT directions.
Received: December 1999 / Accepted: January 2001 / Published online March 22, 2001
4.
Given a linear transformation L: S^n → S^n and a matrix Q ∈ S^n, where S^n is the space of all symmetric real n×n matrices, we consider the semidefinite linear complementarity problem SDLCP(L, S^n_+, Q) over the cone S^n_+ of symmetric n×n positive semidefinite matrices. For such problems, we introduce the P-property and its variants, the Q- and GUS-properties. For a matrix A ∈ R^(n×n), we consider the linear transformation L_A: S^n → S^n defined by L_A(X) := AX + XA^T and show that the P- and Q-properties for L_A are equivalent to A being positive stable, i.e., the real parts of the eigenvalues of A are positive. As a special case of this equivalence, we deduce a theorem of Lyapunov.
Received: March 1999 / Accepted: November 1999 / Published online April 20, 2000
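As a small, hedged illustration of the transformation in entry 4 (not taken from the paper): the NumPy sketch below builds L_A(X) = AX + XA^T for a made-up positive stable A, represents it as a Kronecker sum to check invertibility, and solves L_A(X) = I to recover a positive definite X, the Lyapunov-type behaviour the entry refers to.

    import numpy as np

    def L_A(A, X):
        # The Lyapunov-type linear transformation L_A(X) = AX + XA^T from entry 4.
        return A @ X + X @ A.T

    # A made-up positive stable matrix (all eigenvalues have positive real part).
    A = np.array([[2.0, -1.0],
                  [1.0,  3.0]])
    assert np.all(np.linalg.eigvals(A).real > 0)

    # As a linear map on vec(X), L_A is represented by the Kronecker sum
    # kron(I, A) + kron(A, I), whose eigenvalues are the pairwise sums of the
    # eigenvalues of A; positive stability therefore makes L_A invertible.
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    print(np.linalg.eigvals(K))             # all real parts positive

    # Lyapunov flavour: solve L_A(X) = I and check that X is positive definite.
    X = np.linalg.solve(K, np.eye(n).reshape(-1)).reshape(n, n)
    X = (X + X.T) / 2                       # symmetrize against round-off
    print(np.linalg.eigvalsh(X))            # strictly positive eigenvalues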
5.
Petra Renáta Takács 《Optimization》2018,67(6):889-905
We introduce an interior-point method for symmetric optimization based on a new method for determining search directions. In order to accomplish this, we use a new equivalent algebraic transformation on the centring equation of the system which characterizes the central path. In this way, we obtain a new class of directions. We analyse a special case of this class, which leads to the new interior-point algorithm mentioned before. Another way to find search directions is to use barriers derived from kernel functions. We show that in our case the corresponding direction cannot be deduced from a usual kernel function. In spite of this fact, we prove the polynomial complexity of the proposed algorithm.
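For orientation, a standard linear-optimization analogue of the idea in entry 5 (not the symmetric-cone setting or the specific transformation of the paper): the central path of the pair min{c^T x : Ax = b, x ≥ 0} and its dual is characterized by

    Ax = b, x > 0,
    A^T y + s = c, s > 0,
    xs = μe,

and algebraically equivalent rewritings of the centring equation xs = μe, such as φ(xs/μ) = φ(e) for a suitably chosen function φ, lead to different Newton systems and hence different families of search directions; the entry's algorithm arises from one particular choice of such a transformation in the symmetric-optimization setting.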
6.
Choosing a suitable kernel function is crucial both for designing primal-dual interior-point algorithms for linear programming and semidefinite programming and for their complexity analysis. Bai et al. proposed three kernel functions for linear programming and derived large-update iteration complexity bounds, but gave no numerical experiments verifying the practical performance of the algorithms (Bai Y Q, Xie W, Zhang J. New parameterized kernel functions for linear optimization. J Global Optim, 2012. DOI 10.1007/s10898-012-9934-z). Based on these three kernel functions, we design new primal-dual interior-point algorithms for semidefinite programming. We further analyze the computational complexity bounds of the algorithms for large-update methods, and we use numerical examples to verify the effectiveness of the algorithms and the influence of the kernel-function parameters on the computational complexity.
7.
Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms for solving linear optimization problems. In this paper we present a new kernel function which yields an algorithm with the best known complexity bound for both large- and small-update methods.
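To make the role of a kernel function concrete, here is a hedged sketch using the classical logarithmic-barrier kernel (not the new kernel of entry 7, which is not reproduced in the abstract): a kernel ψ with ψ(1) = 0 induces a proximity Ψ(v) = Σ_i ψ(v_i) of the scaled point v to the central path, and the scaled Newton right-hand side −ψ'(v) drives the search direction. The data below are made up.

    import numpy as np

    # Hedged illustration: the classical logarithmic-barrier kernel, not the new
    # kernel function of entry 7.
    def psi(t):
        # psi(t) = (t^2 - 1)/2 - ln t, with psi(1) = 0 and psi(t) > 0 otherwise.
        return (t ** 2 - 1.0) / 2.0 - np.log(t)

    def dpsi(t):
        # psi'(t) = t - 1/t; in kernel-based primal-dual IPMs the scaled Newton
        # system is driven by the right-hand side -psi'(v).
        return t - 1.0 / t

    def proximity(x, s, mu):
        # Proximity Psi(v) = sum_i psi(v_i) of the scaled point v_i = sqrt(x_i s_i / mu);
        # it vanishes exactly when x s = mu e, i.e. on the central path.
        v = np.sqrt(x * s / mu)
        return np.sum(psi(v)), v

    # Made-up strictly positive primal-dual point.
    x = np.array([1.0, 2.0, 0.5])
    s = np.array([1.5, 0.4, 2.5])
    mu = x @ s / x.size
    Psi, v = proximity(x, s, mu)
    print(Psi, -dpsi(v))   # proximity value and scaled Newton right-hand side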
8.
Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions
In this paper we study primal-dual path-following algorithms for the second-order cone programming (SOCP) based on a family
of directions that is a natural extension of the Monteiro-Zhang (MZ) family for semidefinite programming. We show that the
polynomial iteration-complexity bounds of two well-known algorithms for linear programming, namely the short-step path-following
algorithm of Kojima et al. and Monteiro and Adler, and the predictor-corrector algorithm of Mizuno et al., carry over to the
context of SOCP, that is, they have an O(√n log ε^(-1)) iteration-complexity to reduce the duality gap by a factor of ε, where n is the number of second-order cones. Since the MZ-type family studied in this paper includes an analogue of the Alizadeh,
Haeberly and Overton pure Newton direction, we establish for the first time the polynomial convergence of primal-dual algorithms
for SOCP based on this search direction.
Received: June 5, 1998 / Accepted: September 8, 1999 / Published online April 20, 2000
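A hedged aside on where iteration bounds of this shape come from (standard short-step reasoning, not specific to entry 8): if each iteration reduces the barrier parameter by a fixed factor 1 − δ/√n for some constant δ, then after k iterations

    μ_k = (1 − δ/√n)^k μ_0 ≤ e^(−kδ/√n) μ_0,

so the duality gap is reduced by a factor of ε once k ≥ (√n/δ) ln(1/ε), i.e., after O(√n log ε^(-1)) iterations.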
9.
Received April 15, 1997 / Revised version received July 22, 1998 / Published online November 24, 1998
10.
Based on the authors’ previous work which established theoretical foundations of two, conceptual, successive convex relaxation
methods, i.e., the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming)
Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems
have a linear objective function c^T x to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, “discretization” and “localization,”
into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate an infinite number
of semi-infinite SDPs (or semi-infinite LPs) which appeared at each iteration of the original methods by a finite number of
standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish:
• Given any open convex set U containing F, there is an implementable discretization of the SSDP (or SSILP) Relaxation Method which generates a compact convex set C such that F⊆C⊆U in a finite number of iterations.
The localization technique is for the cases where we are only interested in upper bounds on the optimal objective value (for a fixed objective function vector c) but not in a global approximation of the convex hull of F. This technique allows us to generate a convex relaxation of F that is accurate only in certain directions in a neighborhood of the objective direction c. This cuts off redundant work to make the convex relaxation accurate in unnecessary directions. We establish:
• Given any positive number ε, there is an implementable localization-discretization of the SSDP (or SSILP) Relaxation Method which generates an upper bound of the objective value within ε of its maximum in a finite number of iterations.
Received: June 30, 1998 / Accepted: May 18, 2000 / Published online September 20, 2000
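For background on the relaxations these methods build on (a generic textbook lift, not the discretization or localization procedures of entry 10): a quadratic inequality over x ∈ R^n,

    x^T Q x + 2 q^T x + γ ≤ 0,

can be written as Q•X + 2 q^T x + γ ≤ 0 together with X = x x^T, and relaxing the nonconvex equation X = x x^T to the convex condition X ⪰ x x^T (equivalently, the block matrix [[1, x^T], [x, X]] ⪰ 0) turns every quadratic constraint describing F into a linear constraint in (x, X); successive relaxation methods then tighten such convex outer approximations of F iteratively.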
11.
In this paper, we introduce a transformation that converts a class of linear and nonlinear semidefinite programming (SDP)
problems into nonlinear optimization problems. For those problems of interest, the transformation replaces matrix-valued constraints
by vector-valued ones, hence reducing the number of constraints by an order of magnitude. The class of transformable problems
includes instances of SDP relaxations of combinatorial optimization problems with binary variables as well as other important
SDP problems. We also derive gradient formulas for the objective function of the resulting nonlinear optimization problem
and show that both function and gradient evaluations have affordable complexities that effectively exploit the sparsity of
the problem data. This transformation, together with the efficient gradient formulas, enables the solution of very large-scale
SDP problems by gradient-based nonlinear optimization techniques. In particular, we propose a first-order log-barrier method
designed for solving a class of large-scale linear SDP problems. This algorithm operates entirely within the space of the
transformed problem while still maintaining close ties with both the primal and the dual of the original SDP problem. Global
convergence of the algorithm is established under mild and reasonable assumptions.
Received: January 5, 2000 / Accepted: October 2001 / Published online February 14, 2002
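As a generic illustration of the first-order log-barrier template named in entry 11 (the paper's transformation and gradient formulas are not reproduced; the toy problem, data, and step size below are made-up assumptions): gradient descent is applied to f(x) − μ Σ_i ln g_i(x) for a decreasing sequence of barrier parameters μ.

    import numpy as np

    # Toy instance: minimize c^T x over x > 0, so each barrier subproblem
    # minimize c^T x - mu * sum(log x) has the closed-form minimizer x = mu / c,
    # against which the iterates can be checked. This is the generic template
    # only, not the algorithm of entry 11.
    c = np.array([1.0, 2.0, 0.5])

    def barrier_grad(x, mu):
        # Gradient of the log-barrier objective c^T x - mu * sum(log x).
        return c - mu / x

    x = np.ones_like(c)
    for mu in [1.0, 0.5, 0.25]:            # decrease the barrier parameter
        for _ in range(500):               # crude fixed-step gradient descent
            x_new = x - 0.05 * barrier_grad(x, mu)
            if np.any(x_new <= 0):         # keep the iterate strictly feasible
                break
            x = x_new
        print(mu, x, mu / c)               # iterate vs. exact barrier minimizer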
12.
An improved rounding method and semidefinite programming relaxation for graph partition
Given an undirected graph G=(V,E) with |V|=n and an integer k between 0 and n, the maximization graph partition (MAX-GP) problem is to determine a subset S⊂V of k nodes such that an objective function w(S) is maximized. The MAX-GP problem can be formulated as a binary quadratic program and it is NP-hard. Semidefinite programming
(SDP) relaxations of such quadratic programs have been used to design approximation algorithms with guaranteed performance
ratios for various MAX-GP problems. Based on several earlier results, we present an improved rounding method using an SDP
relaxation, and establish improved approximation ratios for several MAX-GP problems, including Dense-Subgraph, Max-Cut, Max-Not-Cut,
and Max-Vertex-Cover.
Received: March 10, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002
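To illustrate the generic "solve an SDP relaxation, then round" pattern behind entry 12, here is a hedged sketch of the classical random-hyperplane rounding for Max-Cut (not the improved rounding of the paper); the graph, the random seed, and the stand-in vectors are made up.

    import numpy as np

    # Classical random-hyperplane rounding for Max-Cut. The vectors V merely
    # stand in for a factorization X = V^T V (unit diagonal) of an SDP solution,
    # which in practice would come from an SDP solver.
    rng = np.random.default_rng(0)

    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)   # small example graph (weights)
    n = W.shape[0]

    V = rng.normal(size=(n, n))
    V /= np.linalg.norm(V, axis=0)               # unit columns, one per vertex

    def round_hyperplane(V, rng):
        # Cut given by the sign pattern of a random hyperplane through the origin.
        r = rng.normal(size=V.shape[0])
        return np.sign(r @ V)

    def cut_value(W, x):
        # Weight of the cut defined by labels x_i in {-1, +1}.
        return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

    best = max(cut_value(W, round_hyperplane(V, rng)) for _ in range(100))
    print("best cut value found:", best)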
13.
On the complexity of a class of combinatorial optimization problems with uncertainty
Igor Averbakh 《Mathematical Programming》2001,90(2):263-272
We consider a robust (minmax-regret) version of the problem of selecting p elements of minimum total weight out of a set of m elements with uncertainty in weights of the elements. We present a polynomial algorithm with the order of complexity O((min{p, m-p})^2 m) for the case where uncertainty is represented by means of interval estimates for the weights. We show that the problem is
NP-hard in the case of an arbitrary finite set of possible scenarios, even if there are only two possible scenarios. This
is the first known example of a robust combinatorial optimization problem that is NP-hard in the case of scenario-represented
uncertainty but is polynomially solvable in the case of the interval representation of uncertainty.
Received: July 1998 / Accepted: May 2000 / Published online March 22, 2001
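A small brute-force illustration of the problem in entry 13 (the interval data are invented, and this is not the paper's O((min{p, m-p})^2 m) algorithm): since the regret of a fixed selection is convex in the weight vector, its maximum over the box of intervals is attained at an extreme scenario, so enumerating extreme scenarios and all p-element subsets suffices on tiny instances.

    from itertools import combinations, product

    # Brute force over extreme scenarios and all p-subsets; made-up interval data.
    lo = [2, 1, 4, 3, 5]                       # interval lower bounds
    hi = [6, 3, 5, 7, 6]                       # interval upper bounds
    m, p = len(lo), 2

    scenarios = list(product(*zip(lo, hi)))    # all extreme weight scenarios

    def opt_value(w):
        # Weight of the best p-element selection under scenario w.
        return sum(sorted(w)[:p])

    def max_regret(S):
        # Worst-case regret of the selection S over all extreme scenarios.
        return max(sum(w[i] for i in S) - opt_value(w) for w in scenarios)

    best = min((max_regret(S), S) for S in combinations(range(m), p))
    print("minmax regret:", best[0], "selection:", best[1])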
14.
Optimality conditions for nonconvex semidefinite programming
Anders Forsgren 《Mathematical Programming》2000,88(1):105-128
This paper concerns nonlinear semidefinite programming problems for which no convexity assumptions can be made. We derive
first- and second-order optimality conditions analogous to those for nonlinear programming. Using techniques similar to those
used in nonlinear programming, we extend existing theory to cover situations where the constraint matrix is structurally sparse.
The discussion covers the case when strict complementarity does not hold. The regularity conditions used are consistent with
those of nonlinear programming in the sense that the conventional optimality conditions for nonlinear programming are obtained
when the constraint matrix is diagonal.
Received: May 15, 1998 / Accepted: April 12, 2000 / Published online May 12, 2000
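For reference, the standard first-order conditions that entry 14's analysis parallels (the second-order conditions and the sparse-structure refinements of the paper are not reproduced here): for the nonlinear semidefinite program minimize f(x) subject to A(x) ⪰ 0, with f: R^n → R and A: R^n → S^m smooth, the first-order (KKT) conditions at a regular point x read

    ∇f(x) − ( ⟨Λ, ∂A(x)/∂x_1⟩, …, ⟨Λ, ∂A(x)/∂x_n⟩ )^T = 0,
    A(x) ⪰ 0,  Λ ⪰ 0,  ⟨Λ, A(x)⟩ = 0,

where Λ ∈ S^m is the Lagrange multiplier matrix and ⟨·,·⟩ is the trace inner product; since both matrices are positive semidefinite, the complementarity condition ⟨Λ, A(x)⟩ = 0 is equivalent to Λ A(x) = 0.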
15.
16.
Polynomiality of an inexact infeasible interior point algorithm for semidefinite programming
In this paper we present a primal-dual inexact infeasible interior-point algorithm for semidefinite programming problems (SDP). This algorithm allows the use of search directions that are calculated from the defining linear system with only moderate accuracy, and does not require feasibility to be maintained even if the initial iterate happened to be a feasible solution of the problem. Under a mild assumption on the inexactness, we show that the algorithm can find an ε-approximate solution of an SDP in O(n^2 ln(1/ε)) iterations. This bound is the same as that of the exact infeasible interior-point algorithms proposed by Y. Zhang.
Research supported in part by the Singapore-MIT Alliance, and NUS Academic Research Grant R-146-000-032-112.
Mathematics Subject Classification (1991): 90C05, 90C30, 65K05
17.
The many facets of linear programming
Michael J. Todd 《Mathematical Programming》2002,91(3):417-436
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex,
ellipsoid, interior-point, and other methods.
Received: June 22, 2000 / Accepted: April 4, 2001 / Published online October 2, 2001
18.
Primal-Dual Interior-Point Algorithms for Semidefinite Optimization Based on a Simple Kernel Function
Interior-point methods (IPMs) for semidefinite optimization (SDO) have been studied intensively, due to their polynomial complexity
and practical efficiency. Recently, J. Peng et al. introduced so-called self-regular kernel (and barrier) functions and designed
primal-dual interior-point algorithms based on self-regular proximities for linear optimization (LO) problems. They also extended
the approach for LO to SDO. In this paper we present a primal-dual interior-point algorithm for SDO problems based on a simple
kernel function which was first presented at the Proceedings of Industrial Symposium and Optimization Day, Australia, November 2002; the function is not self-regular. We derive the complexity analysis for algorithms based on this
kernel function, with both large- and small-updates; the resulting complexity bounds for the large- and small-update variants are as good as those in the linear case.
Mathematics Subject Classifications (2000): 90C22, 90C31.
19.
Primal-dual interior-point methods (IPMs) have shown their power in solving large classes of optimization problems. However,
at present there is still a gap between the practical behavior of these algorithms and their theoretical worst-case complexity
results, with respect to the strategies of updating the duality gap parameter in the algorithm. The so-called small-update
IPMs enjoy the best known theoretical worst-case iteration bound, but work very poorly in practice. By contrast, the so-called
large-update IPMs have superior practical performance but with relatively weaker theoretical results. In this paper we discuss
the new algorithmic variants and improved complexity results with respect to the new family of Self-Regular proximity based
IPMs for Linear Optimization problems, and their generalizations to Conic and Semidefinite Optimization.
This research was supported by the MITACS project “New IPMs and Software for Convex Conic-Linear Optimization and Their Application
to Solve VLSI Circuit Layout Problems”, by an NSERC discovery grant, and the CRC program. The first author would also like
to thank the Iranian Ministry of Science, Research and Technology for supporting his research.
20.
This note studies χ̄_A, a condition number used in the linear programming algorithm of Vavasis and Ye [14] whose running time depends only on the constraint matrix A ∈ ℝ^(m×n), and σ(A), a variant of another condition number due to Ye [17] that also arises in complexity analyses of linear programming problems. We provide a new characterization of χ̄_A and relate χ̄_A and σ(A). Furthermore, we show that if A is a standard Gaussian matrix, then E(ln χ̄_A) = O(min{m ln n, n}). Thus, the expected running time of the Vavasis-Ye algorithm for linear programming problems is bounded by a polynomial in m and n for any right-hand side and objective coefficient vectors when A is randomly generated in this way. As a corollary of the close relation between χ̄_A and σ(A), we show that the same bound holds for E(ln σ(A)).
Received: September 1998 / Accepted: September 2000 / Published online January 17, 2001