Similar Articles
20 similar articles found.
1.
This paper proposes an infeasible interior-point algorithm with full Nesterov-Todd (NT) steps for semidefinite programming (SDP). The main iteration consists of a feasibility step and several centrality steps. First we present a full NT step infeasible interior-point algorithm based on the classic logarithmic barrier function. Then a specific kernel function is introduced, and the feasibility step is induced by this kernel function instead of the classic logarithmic barrier function. This kernel function has a finite value on the boundary. The polynomial complexity result, O(n log(n/ε)), coincides with the best known bound for infeasible interior-point methods.

2.
For the wide-neighborhood infeasible interior-point algorithm for semidefinite programming, we combine Newton's method with the predictor-corrector method to construct a suitable iteration direction and propose a modified wide-neighborhood infeasible interior-point algorithm for semidefinite programming. Under suitable assumptions, we prove that the algorithm has an iteration-complexity bound of O(√n L). Finally, numerical results based on the KM and NT directions, obtained with a Matlab implementation, are reported.

3.
We propose a new full-Newton step infeasible interior-point algorithm for monotone linear complementarity problems based on a simple locally-kernel function. The algorithm uses this function to determine the search directions and to define the neighborhood of the central path. Two types of full-Newton steps are used: feasibility steps and centering steps. The algorithm starts from strictly feasible iterates of a perturbed problem, on its central path, and feasibility steps find strictly feasible iterates for the next perturbed problem. Centering steps for the new perturbed problem then yield strictly feasible iterates close enough to its central path. The procedure is repeated until an ε-approximate solution is found. We analyze the algorithm and obtain a complexity bound that coincides with the best-known result for monotone linear complementarity problems.

4.
An infeasible interior-point method (IIPM) for solving linear optimization problems based on a kernel function with a trigonometric barrier term is analysed. In each iteration, the algorithm involves a feasibility step and several centring steps. The centring steps are based on the classical Newton direction, while the kernel function with a trigonometric barrier term is used to induce the feasibility step. The complexity result coincides with the best-known iteration bound for IIPMs. To our knowledge, this is the first full-Newton step IIPM based on a kernel function with a trigonometric barrier term.

5.
Optimization, 2012, 61(7): 1577-1591
We present an infeasible interior-point algorithm for the symmetric linear complementarity problem based on modified Nesterov–Todd directions, using Euclidean Jordan algebras. The algorithm decreases the duality gap and the feasibility residual at the same rate. We construct strictly feasible iterates for a sequence of perturbations of the given problem. Each main iteration of the algorithm consists of a feasibility step and a number of centring steps. The starting point in the first iteration is strictly feasible for a perturbed problem, and the feasibility step leads to a strictly feasible iterate for the next perturbed problem. Centring steps for the new perturbed problem then produce a strictly feasible iterate close to its central path. Finally, through a complexity analysis of the algorithm, we derive the currently best-known iteration bound for infeasible interior-point methods.

6.
Kernel functions play an important role in the design and analysis of primal-dual interior-point algorithms. They are used not only to determine the search directions but also to measure the distance between the given iterate and the μ-center. In this paper we present a unified kernel-function approach to primal-dual interior-point algorithms for convex quadratic semidefinite optimization based on the Nesterov and Todd symmetrization scheme. The iteration bounds obtained for large- and small-update methods are analogous to those in the linear optimization case. Moreover, this unifies the analysis for linear, convex quadratic and semidefinite optimization.
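The role of a kernel function as a proximity measure can be illustrated in the simpler linear-optimization setting: summing the classic logarithmic kernel over the scaled variance vector gives a quantity that vanishes exactly at the μ-center. A minimal numpy sketch (the function names are illustrative, not taken from the paper):

```python
import numpy as np

def psi(t):
    # classic logarithmic kernel: psi(1) = 0, and psi grows as t -> 0+ or t -> inf
    return (t ** 2 - 1) / 2 - np.log(t)

def proximity(x, s, mu):
    # Psi(v) with variance vector v_i = sqrt(x_i * s_i / mu);
    # Psi(v) = 0 iff x_i * s_i = mu for all i, i.e. at the mu-center
    v = np.sqrt(x * s / mu)
    return np.sum(psi(v))

x = np.array([1.0, 2.0])
s = np.array([2.0, 1.0])
on_center = proximity(x, s, 2.0)    # x_i * s_i = mu everywhere -> proximity 0
off_center = proximity(x, s, 1.0)   # strictly positive away from the mu-center
```

Large- and small-update methods differ only in how far the target μ is reduced before this proximity is driven back below a threshold.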

7.
We present a unified analysis for a class of long-step primal-dual path-following algorithms for semidefinite programming whose search directions are obtained through linearization of the symmetrized central-path equation H_P(XS) ≡ [P X S P^{-1} + (P X S P^{-1})^T]/2 = μI, introduced by Zhang. At an iterate (X, S), we choose a scaling matrix P from the class of nonsingular matrices P such that P X S P^{-1} is symmetric. This class includes the three well-known choices, namely P = S^{1/2} and P = X^{-1/2}, proposed by Monteiro, and the matrix P corresponding to the Nesterov–Todd direction. We show that, within the class of algorithms studied in this paper, the one based on the Nesterov–Todd direction has the lowest possible iteration-complexity bound that can provably be derived from our analysis. More specifically, its iteration-complexity bound is of the same order as that of the corresponding long-step primal-dual path-following algorithm for linear programming introduced by Kojima, Mizuno and Yoshise. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research supported in part by NSF grants INT-9600343 and CCR-9700448, ONR grant N00014-94-1-0340, and DOE grant DE-FG02-93ER25171-A001.
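The symmetrization and the choice of P can be checked numerically: with Monteiro's choice P = S^{1/2}, the matrix P X S P^{-1} = S^{1/2} X S^{1/2} is already symmetric, and the symmetrization preserves the trace of XS. A small numpy sketch on random SPD test data (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
G = rng.standard_normal((n, n)); X = G @ G.T + n * np.eye(n)   # SPD iterate X
G = rng.standard_normal((n, n)); S = G @ G.T + n * np.eye(n)   # SPD iterate S

def sym_sqrt(M):
    # symmetric positive definite square root via eigendecomposition
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def H(P, X, S):
    # Zhang's symmetrization: H_P(XS) = (P X S P^-1 + (P X S P^-1)^T) / 2
    M = P @ X @ S @ np.linalg.inv(P)
    return (M + M.T) / 2

P = sym_sqrt(S)                 # Monteiro's choice P = S^(1/2)
M = P @ X @ S @ np.linalg.inv(P)
sym_err = np.linalg.norm(M - M.T)   # ~0: P X S P^-1 is symmetric for this P
Hs = H(P, X, S)
```

Since trace(P X S P^{-1}) = trace(XS), the duality-gap measure trace(XS)/n is unchanged by the scaling, whichever admissible P is picked.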

8.
In this paper, we introduce a vector-valued Tikhonov-type regularization algorithm for an extended-valued multiobjective optimization problem. Under some mild conditions, we prove that any sequence generated by this algorithm converges to a weak Pareto optimal solution of the multiobjective optimization problem. Our results improve and generalize some known results.

9.
In 1980, Brézis [6], using the technique of dividing the total space into two parts, proved the limiting-case embedding theorem, which is very important in applications. In 1982, Ding Xiaqi improved the proof given in [6] by dividing the total space into three parts. In this paper, using the same three-part decomposition, the author gives a unified proof of the results obtained by Ding [3, 4] and establishes a limiting-case embedding theorem (Lemma 2.2). Two kinds of examples are also given, applying the embedding theorems (limiting and non-limiting case) and the interpolation theorems. These examples are singular perturbation problems in the sense of Lions [1] (for the definition of singular perturbation, see the Introduction of [1]), yet the singular solution U_ε converges uniformly to the limit (degenerate) solution U as ε → 0.

10.
Adler and Monteiro (1992) developed a parametric analysis approach that is naturally related to the geometry of the linear program. This approach is based on the availability of primal and dual optimal solutions satisfying strong complementarity. In this paper, we develop an alternative geometric approach for parametric analysis which does not require the strong complementarity condition. This parametric analysis approach is used to develop range and marginal analysis techniques which are suitable for interior point methods. Two approaches are developed, namely the LU factorization approach and the affine scaling approach. Presented at the ORSA/TIMS, Nashville, TN, USA, May 1991. Supported by the National Science Foundation (NSF) under Grant No. DDM-9109404 and Grant No. DMI-9496178. This work was done while the author was a faculty member of the Systems and Industrial Engineering Department at The University of Arizona. Supported in part by the GTE Laboratories and the National Science Foundation (NSF) under Grant No. CCR-9019469.

11.
Interior-point methods are among the most efficient approaches for solving large-scale nonlinear programming problems. At the core of these methods, highly ill-conditioned symmetric saddle-point problems have to be solved. We present combinatorial methods to preprocess these matrices in order to establish more favorable numerical properties for the subsequent factorization. Our approach is based on symmetric weighted matchings and is used in a sparse direct LDL T factorization method where the pivoting is restricted to static supernode data structures. In addition, we will dynamically expand the supernode data structure in cases where additional fill-in helps to select better numerical pivot elements. This technique can be seen as an alternative to the more traditional threshold pivoting techniques. We demonstrate the competitiveness of this approach within an interior-point method on a large set of test problems from the CUTE and COPS sets, as well as large optimal control problems based on partial differential equations. The largest nonlinear optimization problem solved has more than 12 million variables and 6 million constraints.
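The matching-based preprocessing and supernodal pivoting described above are well beyond a short sketch, but the kernel operation, a symmetric indefinite LDL^T factorization of a saddle-point (KKT) matrix, can be shown on a dense toy example. A minimal unpivoted sketch (real sparse solvers pivot, which is exactly the point of the paper):

```python
import numpy as np

def ldlt(K):
    # dense LDL^T without pivoting: K = L D L^T, L unit lower triangular, D diagonal
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

# toy saddle-point matrix [[H, A^T], [A, 0]] as it arises in interior-point steps
H = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian block (SPD)
A = np.array([[1.0, 1.0]])               # constraint Jacobian
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])

L, d = ldlt(K)
recon_err = np.linalg.norm(L @ np.diag(d) @ L.T - K)
```

The last pivot d[2] comes out negative, reflecting the indefiniteness contributed by the constraint block; on ill-conditioned instances, unpivoted elimination like this breaks down, which motivates the pivoting strategies in the abstract.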

12.
Randomized algorithms provide solutions to two ubiquitous problems: (1) the distributed calculation of a principal component analysis or singular value decomposition of a highly rectangular matrix, and (2) the distributed calculation of a low-rank approximation (in the form of a singular value decomposition) to an arbitrary matrix. Carefully honed algorithms yield results that are uniformly superior to those of the stock, deterministic implementations in Spark (the popular platform for distributed computation); in particular, whereas the stock software will without warning return left singular vectors that are far from numerically orthonormal, a significantly burnished randomized implementation generates left singular vectors that are numerically orthonormal to nearly the machine precision.
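The core randomized low-rank SVD idea can be sketched on a single machine (the paper's distributed Spark implementation is not reproduced here; names and the oversampling parameter are illustrative): sketch the range with a random test matrix, orthonormalize, and compute a small deterministic SVD.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    # randomized range finder + small SVD (Halko-Martinsson-Tropp style sketch)
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis for range(A @ Omega)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small                             # lift back: columns stay orthonormal
    return U[:, :k], s[:k], Vt[:k, :]

# exactly rank-3 test matrix: recovery should be near machine precision
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
U, s, Vt = randomized_svd(M, 3)
err = np.linalg.norm(U @ np.diag(s) @ Vt - M) / np.linalg.norm(M)
orth = np.linalg.norm(U.T @ U - np.eye(3))     # numerical orthonormality of U
```

The `orth` check mirrors the abstract's point: because U is obtained from an explicit QR orthonormalization, its columns are numerically orthonormal by construction.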

13.
Let Ω be a bounded domain. We prove a singular Moser-Trudinger embedding and characterize exactly when it holds. We also study the corresponding critical exponent problem.

14.
15.
This paper presents the convergence proof and complexity analysis of an interior-point framework that solves linear programming problems by dynamically selecting and adding relevant inequalities. First, we formulate a new primal–dual interior-point algorithm for solving linear programmes in non-standard form with equality and inequality constraints. The algorithm uses a primal–dual path-following predictor–corrector short-step interior-point method that starts with a reduced problem without any inequalities and selectively adds a given inequality only if it becomes active on the way to optimality. Second, we prove convergence of this algorithm to an optimal solution at which all inequalities are satisfied, regardless of whether they have been added by the algorithm. We thus provide a theoretical foundation for similar schemes already used in practice. We also establish conditions under which the complexity of such an algorithm is polynomial in the problem dimension, and discuss the remaining limitations when these conditions do not hold, as directions for further research.

16.
A unified approach to the analysis of bilinear algorithms is outlined with the class of algorithms for matrix multiplication as the main example. Some comments are given about possible applications to the convolution of vectors and the Direct Sum problem. A new proof that at least seven multiplications are needed in the 2 × 2 case is given as an illustration of the general approach.
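Seven multiplications are also sufficient in the 2 × 2 case, by Strassen's classical scheme (so the lower bound above is tight); a direct transcription, with matrices as tuples of tuples:

```python
# Strassen's scheme: 2x2 matrix product using 7 scalar multiplications
# (the naive algorithm uses 8)
def strassen_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # recombine the 7 products into the 4 entries of the result
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

c = strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8)))   # ((19, 22), (43, 50))
```

Because the scheme never uses commutativity of the scalar products, it applies recursively with matrix blocks in place of scalars, which is what yields the sub-cubic multiplication algorithm.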

17.
In this paper we describe a computational study of block principal pivoting (BP) and interior-point predictor-corrector (PC) algorithms for the solution of large-scale linear complementarity problems (LCPs) with symmetric positive definite matrices. This study shows that these algorithms are in general quite appropriate for this type of LCP. The BP algorithm does not seem to be sensitive to bad scaling or to degeneracy of the unique solution of the LCP, while these aspects have some effect on the performance of the PC algorithm. On the other hand, the BP method did not perform well on two LCPs with ill-conditioned matrices for which the PC algorithm behaved quite well. A hybrid algorithm combining these two techniques is also introduced and seems to be the most robust procedure for the solution of large-scale LCPs with symmetric positive definite matrices. Support of this work has been provided by the Instituto de Telecomunicações.

18.
Controlled Perturbation (CP, for short) is an approach to obtaining efficient and robust implementations of a large class of geometric algorithms, using the computational speed of multiple-precision floating-point arithmetic (compared with exact arithmetic) while bypassing the precision problems by perturbation. It also allows algorithms to be written without consideration of degenerate cases. CP replaces the input objects by a set of randomly perturbed (moved, scaled, stretched, etc.) objects and protects the evaluation of geometric predicates by guards. The execution is aborted if a guard indicates that the evaluation of a predicate with floating-point arithmetic may return an incorrect result. If the execution is aborted, the algorithm is rerun on a new perturbation, possibly with a higher precision of the floating-point arithmetic. If the algorithm runs to completion, it returns the correct output for the perturbed input. The analysis of CP algorithms relates various parameters: the perturbation amount, the arithmetic precision, the range of input values, and the number of input objects. We present a general methodology for analyzing CP algorithms. It is powerful enough to analyze all geometric predicates that are formulated as signs of polynomials.
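The guard-and-retry loop can be sketched for the 2D orientation predicate, the simplest sign-of-polynomial predicate. This is only a flavour of CP: the fixed threshold `eps` stands in for a rigorous floating-point error bound, and the perturbation magnitude `delta` is an illustrative placeholder rather than a value derived from the paper's analysis.

```python
import random

def orient2d(ax, ay, bx, by, cx, cy, eps=1e-12):
    # sign of the determinant deciding the turn direction of (a, b, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if abs(det) <= eps:
        # guard fires: the floating-point result is too close to zero to trust
        raise ArithmeticError("predicate not certified")
    return 1 if det > 0 else -1

def guarded_orientation(pts, delta=1e-6, max_tries=20, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        try:
            # perturb every coordinate, then evaluate the guarded predicate
            q = [c + rng.uniform(-delta, delta) for p in pts for c in p]
            return orient2d(*q)
        except ArithmeticError:
            continue   # guard aborted the run: retry with a fresh perturbation
    raise RuntimeError("no certified perturbation found")

# three exactly collinear points: the exact predicate is degenerate,
# but the perturbed input yields a certified (nonzero) sign
sign = guarded_orientation([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```

In a full CP implementation, a failed run would also be allowed to raise the arithmetic precision, not just redraw the perturbation.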

19.
The k-means problem is one of the classic problems in computer science and combinatorial optimization, and k-means clustering, as the most popular and most easily understood clustering method, is widely used in data mining. The k-means problem can be stated as follows: given an observation set of n points, each a d-dimensional real vector, partition the n observations into k (≤ n) sets so as to minimize the sum of squared distances from every point to the center of its cluster, where the center of a cluster is the mean of its observations. The k-means problem is NP-hard in theory, but efficient heuristics exist, and it is widely applied in practice, for example in market segmentation, machine vision, geostatistics, astronomy, and agriculture. As the k-means problems encountered in practice become more complex and the data volumes larger, further study is needed. This paper surveys classic algorithms for the k-means problem and its many variants and generalizations, and summarizes several open questions about k-means.
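The standard heuristic for the k-means problem is Lloyd's algorithm, which alternates between assigning points to their nearest center and recomputing each center as the cluster mean. A minimal numpy sketch with a simple farthest-point initialization (the initialization scheme is one common choice, not prescribed by the survey):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50):
    # farthest-point initialization: greedily pick spread-out data points as seeds
    centers = [X[0]]
    for _ in range(k - 1):
        dist = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    cost = ((X - centers[labels]) ** 2).sum()   # k-means objective
    return labels, centers, cost

# two well-separated Gaussian blobs: Lloyd's algorithm should recover them
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
labels, centers, cost = lloyd_kmeans(X, 2)
```

Lloyd's algorithm only converges to a local optimum of the objective, which is consistent with the NP-hardness of the exact problem noted above.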

20.
The implicit Cholesky algorithm has been developed by several authors during the last 10 years but under different names. We identify the algorithm with a special version of Rutishauser's LR algorithm. Intermediate quantities in the transformation furnish several attractive approximations to the smallest singular value. The paper extols the advantages of using shifts with the algorithm. The nonorthogonal transformations improve accuracy.

