1.
Primal-dual interior-point methods (IPMs) have shown their power in solving large classes of optimization problems. However,
at present there is still a gap between the practical behavior of these algorithms and their theoretical worst-case complexity
results, with respect to the strategies of updating the duality gap parameter in the algorithm. The so-called small-update
IPMs enjoy the best known theoretical worst-case iteration bound but perform poorly in practice. By contrast, the so-called
large-update IPMs have superior practical performance but relatively weaker theoretical results. In this paper we discuss
new algorithmic variants and improved complexity results with respect to the new family of Self-Regular proximity based
IPMs for Linear Optimization problems, and their generalizations to Conic and Semidefinite Optimization.
This research was supported by the MITACS project “New IPMs and Software for Convex Conic-Linear Optimization and Their Application
to Solve VLSI Circuit Layout Problems”, by an NSERC discovery grant, and the CRC program. The first author would also like
to thank the Iranian Ministry of Science, Research and Technology for supporting his research.
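As background for the small-update versus large-update distinction discussed in this abstract (standard IPM material, not taken from the paper itself), the duality gap parameter μ on the central path is reduced by a factor 1 − θ at each iteration, and the choice of θ drives the worst-case bounds:

```latex
\begin{aligned}
&\text{central path:}\quad x(\mu)\,s(\mu) = \mu e, \qquad \mu > 0,\\
&\text{update:}\quad \mu^{+} = (1-\theta)\,\mu,\\
&\theta = \Theta\!\left(\tfrac{1}{\sqrt{n}}\right)
  \;\Rightarrow\; O\!\left(\sqrt{n}\,\log\tfrac{n}{\varepsilon}\right)
  \quad\text{(small-update)},\\
&\theta = \Theta(1)
  \;\Rightarrow\; O\!\left(n\,\log\tfrac{n}{\varepsilon}\right)
  \text{ classically, improved to }
  O\!\left(\sqrt{n}\,\log n\,\log\tfrac{n}{\varepsilon}\right)
  \text{ via self-regular proximities (large-update)}.
\end{aligned}
```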
2.
For an edge-weighted graph G with n vertices and m edges, we present a new deterministic algorithm for computing a minimum k-way cut for k=3,4. The algorithm runs in $O(n^{k-1}F(n,m)) = O(mn^{k}\log(n^{2}/m))$ time and $O(n^{2})$ space for k=3,4, where F(n,m) denotes the time bound required to solve the maximum flow problem in G. The bound for k=3 matches the current best deterministic bound $\tilde{O}(mn^{3})$ for weighted graphs, but improves the bound $\tilde{O}(mn^{3})$ to $O(n^{2}F(n,m)) = O(\min\{mn^{8/3}, m^{3/2}n^{2}\})$ for unweighted graphs. The bound $\tilde{O}(mn^{4})$ for k=4 improves the previous best randomized bound $\tilde{O}(n^{6})$ (for $m = o(n^{2})$). The algorithm is then generalized to the problem of finding a minimum 3-way cut in a symmetric submodular system.
Received: April 1999 / Accepted: February 2000 / Published online: August 18, 2000
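The notion of a minimum 3-way cut used above can be illustrated with a brute-force sketch over vertex partitions. This is only the definition on a tiny made-up graph, not the paper's max-flow-based $O(n^{k-1}F(n,m))$ algorithm:

```python
from itertools import product

def min_3way_cut(n, edges):
    """Brute-force minimum 3-way cut: assign each of the n vertices to one of
    three non-empty parts and sum the weights of edges crossing parts.
    Exponential in n; usable only as a definitional check on tiny graphs."""
    best = float("inf")
    for assign in product(range(3), repeat=n):
        if len(set(assign)) < 3:          # all three parts must be non-empty
            continue
        cut = sum(w for (u, v, w) in edges if assign[u] != assign[v])
        best = min(best, cut)
    return best

# Example: two heavy triangles joined by two light edges.
edges = [(0, 1, 5), (1, 2, 5), (0, 2, 5),
         (3, 4, 5), (4, 5, 5), (3, 5, 5),
         (2, 3, 1), (0, 4, 1)]
print(min_3way_cut(6, edges))
```

Any 3-way partition must break at least one triangle, so the optimum here separates the two triangles and isolates one vertex from one of them.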
3.
In this paper, motivated by the complexity results of Interior Point Methods (IPMs) for Linear Optimization (LO) based on kernel functions, we present a polynomial time IPM for solving the P*(κ)-linear complementarity problem, using a new class of kernel functions. The special case of our new class was considered earlier for LO by Y. Q. Bai et al. in 2004. Using some appealing properties of the new class, we show that the iteration bound for IPMs matches the so far best known theoretical iteration bound for both large and small updates by choosing special values for the parameters of the new class.
4.
Maziar Salahi, Tamás Terlaky, Guoqing Zhang, Computational Optimization and Applications, 2006, 33(2-3):157-185
Primal-Dual Interior-Point Methods (IPMs) have shown their power in solving large classes of optimization problems. In this
paper a self-regular proximity based Infeasible Interior Point Method (IIPM) is proposed for linear optimization problems.
First we mention some interesting properties of a specific self-regular proximity function, studied recently by Peng and Terlaky,
and use it to define infeasible neighborhoods. These simple but interesting properties of the proximity function indicate
that, when the current iterate is in a large neighborhood of the central path, large-update IIPMs emerge as the only natural
choice. Then, we apply these results to design a specific self-regularity based dynamic large-update IIPM in large neighborhood.
The new dynamic IIPM always takes large updates and does not utilize any inner iteration to get centered. A
worst-case iteration bound for the algorithm is established. Finally, we report the main results of our computational experiments.
5.
M. Reza Peyghami, PAMM, 2007, 7(1):2060081-2060082
One of the main ingredients of interior point methods is the proximity function used to measure the distance of the iterates from the central path of linear optimization problems. In this paper, an interior point method for solving the P*(κ)-linear complementarity problem, κ ≥ 0, is proposed. For this version, we use a new class of proximity functions induced by new kernel functions. Using some mild and easy-to-check conditions, we show that the large-update primal-dual interior point methods for solving the P*(κ)-linear complementarity problem enjoy the so far best known worst-case theoretical complexity, namely an $O(\kappa\sqrt{n}\log n\log\frac{n}{\varepsilon})$ iteration bound, with special choices of the parameters p, q ≥ 1. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
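The abstracts in this listing do not reproduce their new kernel functions. For orientation, the classical logarithmic kernel that these bounds are compared against can be sketched as follows (a minimal illustration of a kernel-induced proximity measure, not the class proposed in any of these papers):

```python
import math

def psi(t):
    """Classical logarithmic kernel: psi(1) = 0 and psi'(1) = 0,
    so its minimum is attained at t = 1 (i.e., on the central path)."""
    return (t * t - 1) / 2 - math.log(t)

def proximity(v):
    """Kernel-based proximity of the scaled iterate v to the central path;
    zero if and only if v is the all-ones vector e."""
    return sum(psi(t) for t in v)

# On the central path the scaled iterate equals e, so the proximity is 0:
print(proximity([1.0, 1.0, 1.0]))   # 0.0
```

Kernel-function IPM analyses replace psi with functions whose barrier and growth terms yield better large-update bounds, which is exactly the game played in items 3 and 5 through 8 above.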
6.
Behrouz Kheirfam, Numerical Algorithms, 2012, 61(4):659-680
In this paper we propose primal-dual interior-point algorithms for semidefinite optimization problems based on a new kernel function with a trigonometric barrier term. We show that the iteration bounds are $O(\sqrt{n}\log(\frac{n}{\epsilon}))$ for small-update methods and $O(n^{\frac{3}{4}}\log(\frac{n}{\epsilon}))$ for large-update methods, respectively. The resulting bound is better than the one obtained with the classical kernel function. For small-update methods, the iteration complexity matches the best known bound for such methods.
7.
Ming Wang Zhang, Acta Mathematica Sinica (English Series), 2012, 28(11):2313-2328
In this paper, we present a large-update interior-point algorithm for convex quadratic semidefinite optimization based on a new kernel function. The proposed function is strongly convex; it is neither a self-regular function nor the usual logarithmic function. The goal of this paper is to investigate such a kernel function and show that the algorithm has a favorable complexity bound in terms of the elegant analytic properties of the kernel function. The complexity bound is shown to be $O\left( {\sqrt n \left( {\log n} \right)^2 \log \frac{n}{\varepsilon }} \right)$. This bound is better than those obtained by the classical primal-dual interior-point methods based on the logarithmic barrier function and by recent kernel functions introduced by some authors in the optimization field. Some computational results are also provided.
8.
Yan Qin Bai, Guo Qiang Wang, Acta Mathematica Sinica (English Series), 2007, 23(11):2027-2042
A class of polynomial primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, with parameters p and q, is presented. Its growth term is between linear and quadratic. Some new tools for the analysis of the algorithms are proposed. The complexity bounds of $O(\sqrt{N}\log N\log\frac{N}{\varepsilon})$ for large-update methods and $O(\sqrt{N}\log\frac{N}{\varepsilon})$ for small-update methods match the best known complexity bounds obtained for these methods. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p and q.
9.
10.
Based on the authors' previous work, which established theoretical foundations of two conceptual successive convex relaxation methods, i.e., the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming) Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems have a linear objective function $c^{T}x$ to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, "discretization" and "localization," into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate the infinite number of semi-infinite SDPs (or semi-infinite LPs) that appear at each iteration of the original methods by a finite number of standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish:
• Given any open convex set U containing F, there is an implementable discretization of the SSDP (or SSILP) Relaxation Method which generates a compact convex set C such that F ⊆ C ⊆ U in a finite number of iterations.
The localization technique is for the cases where we are only interested in upper bounds on the optimal objective value (for a fixed objective function vector c) but not in a global approximation of the convex hull of F. This technique allows us to generate a convex relaxation of F that is accurate only in certain directions in a neighborhood of the objective direction c, and cuts off the redundant work of making the convex relaxation accurate in unnecessary directions. We establish:
• Given any positive number ε, there is an implementable localization-discretization of the SSDP (or SSILP) Relaxation Method which generates an upper bound of the objective value within ε of its maximum in a finite number of iterations.
Received: June 30, 1998 / Accepted: May 18, 2000 / Published online: September 20, 2000
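The quadratic problems treated above, and the basic semidefinite lifting that SSDP-type relaxations build on, can be sketched as follows (this is the standard lifting, stated for orientation rather than as the paper's own construction):

```latex
\max_{x \in \mathbb{R}^n} \; c^T x
\quad \text{s.t.} \quad
x^T Q_i x + 2\,q_i^T x + \gamma_i \le 0, \qquad i = 1,\dots,m.
```

Introducing a matrix variable $X$ in place of $xx^T$ and relaxing the rank-one equality $X = xx^T$ to

```latex
\begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \succeq 0
```

turns each quadratic constraint into the linear (in $(x,X)$) constraint $Q_i \bullet X + 2\,q_i^T x + \gamma_i \le 0$, yielding a standard SDP whose feasible set contains F.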