20 similar documents found; search took 10 ms
1.
Kim-Chuan Toh 《Mathematical Programming》2008,112(1):221-254
We propose primal–dual path-following Mehrotra-type predictor–corrector methods for solving convex quadratic semidefinite programming (QSDP) problems of the form: min { ½ X • Q(X) + C • X : A(X) = b, X ⪰ 0 }, where Q is a self-adjoint positive semidefinite linear operator on the space of n × n symmetric matrices, b ∈ R^m, and A is a linear map from that space to R^m. At each interior-point iteration, the search direction is computed from a dense symmetric indefinite linear system (called the augmented equation) of dimension m + n(n + 1)/2. Such linear systems are typically very large and can only be solved by iterative methods. We propose three classes of preconditioners for the augmented equation, and show that the corresponding preconditioned matrices have favorable asymptotic eigenvalue distributions for fast convergence under suitable nondegeneracy assumptions. Numerical experiments on a variety of QSDPs with n up to 1600 are performed, and the computational results show that our methods are efficient and robust.
Research supported in part by Academic Research Grant R146-000-076-112.
2.
Numerical Algorithms - The primal–dual hybrid gradient (PDHG) method has been widely used for solving saddle point problems arising in image processing. In particular, PDHG can be used to...
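To make the PDHG iteration mentioned in this entry concrete, here is a minimal sketch on a made-up instance (not code from any of the papers above): a tiny least-squares problem rewritten as a saddle point min_x max_y ⟨Ax, y⟩ − (½‖y‖² + ⟨b, y⟩), where the dual proximal step has a closed form. The diagonal operator, data, and step sizes are all hypothetical.

```python
# Toy saddle-point instance: min_x 0.5*||A x - b||^2 rewritten as
# min_x max_y <A x, y> - (0.5*||y||^2 + <b, y>).
# PDHG alternates a proximal dual ascent step with a gradient primal
# descent step evaluated at the extrapolated point x_bar = 2*x_new - x_old.
a = [2.0, 1.0]          # diagonal operator A = diag(a); ||A|| = 2
b = [2.0, 3.0]
tau = sigma = 0.4       # step sizes with tau * sigma * ||A||^2 = 0.64 < 1
x = [0.0, 0.0]
x_bar = x[:]
y = [0.0, 0.0]

for _ in range(5000):
    # dual step: y = prox_{sigma f*}(y + sigma * A x_bar), closed form here
    y = [(y[i] + sigma * a[i] * x_bar[i] - sigma * b[i]) / (1 + sigma)
         for i in range(2)]
    # primal step (g = 0, so a plain gradient step) and extrapolation
    x_new = [x[i] - tau * a[i] * y[i] for i in range(2)]
    x_bar = [2 * x_new[i] - x[i] for i in range(2)]
    x = x_new

# x approaches the least-squares solution A^{-1} b = [1, 3]
```

The extrapolation step is what distinguishes PDHG from plain alternating gradient descent/ascent, which need not converge on such bilinear couplings.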
3.
In this paper we combine partial updating and an adaptation of Anstreicher's safeguarded linesearch of the primal–dual potential function with Kojima, Mizuno and Yoshise's potential reduction algorithm for the linear complementarity problem to obtain an O(n³L) algorithm for convex quadratic programming. Our modified algorithm is a long step method that requires at most O(√n L) steps.
This research was supported in part by ONR Contract N-00014-87-K0214 and NSF Grants DMS-85-12277 and DMS-91-06195.
4.
5.
This paper is concerned with a primal–dual interior point method for solving nonlinear semidefinite programming problems. The method consists of an outer iteration (SDPIP) that finds a KKT point and an inner iteration (SDPLS) that calculates an approximate barrier KKT point. Algorithm SDPLS uses a commutative class of Newton-like directions for the generation of line search directions. By combining the primal barrier penalty function and the primal–dual barrier function, a new primal–dual merit function is proposed. We prove the global convergence property of our method. Finally, some numerical experiments are given.
6.
Mathematical Programming - In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in a reflexive Banach space. Such problems often arise in many practical...
7.
8.
Renato D. C. Monteiro 《Mathematical Programming》1994,64(1-3):123-147
In this paper, we study the global convergence of a large class of primal–dual interior point algorithms for solving the linearly constrained convex programming problem. The algorithms in this class decrease the value of a primal–dual potential function and hence belong to the class of so-called potential reduction algorithms. An inexact line search based on the Armijo stepsize rule is used to compute the stepsize. The directions used by the algorithms are the same as the ones used in primal–dual path following and potential reduction algorithms, and a very mild condition on the choice of the centering parameter is assumed. The algorithms always keep primal and dual feasibility and, in contrast to the polynomial potential reduction algorithms, they do not need to drive the value of the potential function towards −∞ in order to converge. One of the techniques used in the convergence analysis of these algorithms has its roots in nonlinear unconstrained optimization theory.
Research supported in part by NSF Grant DDM-9109404.
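As a small, hypothetical illustration of the potential reduction idea in this entry (not the paper's algorithm, which additionally maintains primal and dual feasibility), the sketch below applies steepest descent with an Armijo backtracking line search to a Tanabe–Todd–Ye-style potential φ(x, s) = ρ ln(xᵀs) − Σ ln(x_i s_i) for a toy pair (x, s) > 0; the starting point and n = 2 are made up.

```python
import math

# phi(x, s) = rho * ln(x's) - sum_i ln(x_i * s_i); since rho > n, phi is
# unbounded below and decreasing it drives the duality gap x's toward 0.
def phi(x, s, rho):
    gap = sum(xi * si for xi, si in zip(x, s))
    return rho * math.log(gap) - sum(math.log(v) for v in x + s)

n = 2
rho = n + math.sqrt(n)
x, s = [2.0, 0.5], [1.0, 3.0]       # hypothetical strictly positive start
phi0 = phi(x, s, rho)

for _ in range(100):
    gap = sum(xi * si for xi, si in zip(x, s))
    grad = ([rho * s[i] / gap - 1.0 / x[i] for i in range(n)] +
            [rho * x[i] / gap - 1.0 / s[i] for i in range(n)])
    d = [-g for g in grad]          # steepest descent direction
    slope = sum(g * di for g, di in zip(grad, d))
    alpha, f0 = 1.0, phi(x, s, rho)
    while True:                     # Armijo backtracking; also keep (x,s) > 0
        xn = [x[i] + alpha * d[i] for i in range(n)]
        sn = [s[i] + alpha * d[n + i] for i in range(n)]
        if min(xn + sn) > 0 and phi(xn, sn, rho) <= f0 + 1e-4 * alpha * slope:
            break
        alpha *= 0.5
    x, s = xn, sn
# phi(x, s, rho) is now strictly smaller than phi0
```

Each accepted step satisfies the Armijo sufficient-decrease condition, so the potential decreases monotonically even though the stepsize is chosen inexactly.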
9.
Block coordinate update (BCU) methods enjoy low per-update computational complexity because, at each update, only one or a few block variables need to be updated among a possibly large number of blocks. They are also easily parallelized and have thus been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal–dual BCU method for solving linearly constrained convex programs with multi-block variables. The method is an accelerated version of a primal–dual algorithm proposed by the authors, which applies randomization in selecting block variables to update and establishes an O(1/t) convergence rate under a convexity assumption. We show that the rate can be accelerated to O(1/t²) if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we show that the algorithm can be modified to achieve a linear rate of convergence. The numerical experiments show that the accelerated method performs stably with a single set of parameters, while the original method needs its parameters tuned for different datasets to achieve a comparable level of performance.
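The primal–dual block-update idea can be sketched on a made-up two-block toy problem. The snippet below uses cyclic exact minimization over one block at a time of an augmented Lagrangian, followed by a dual ascent step; this is a simplified stand-in for the paper's accelerated randomized scheme, and the problem data and penalty parameter are hypothetical.

```python
# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 2
# (solution x = (1, 1), optimal multiplier lam = -2).
rho = 1.0               # augmented-Lagrangian penalty, also the dual step size
x = [0.0, 0.0]
lam = 0.0

for _ in range(300):
    for i in range(2):  # cycle through blocks, minimizing one block exactly
        other = x[1 - i]
        # argmin_xi  xi^2 + lam*(xi + other - 2) + (rho/2)*(xi + other - 2)^2
        # first-order condition: 2*xi + lam + rho*(xi + other - 2) = 0
        x[i] = (rho * (2.0 - other) - lam) / (2.0 + rho)
    lam += rho * (x[0] + x[1] - 2.0)   # dual ascent on the constraint residual
```

With two blocks this cyclic variant coincides with an ADMM-style sweep; the paper's method instead randomizes the block selection and adds acceleration.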
10.
We give some properties and uses of a primal–dual operation on sets that appears in the closed convex relaxation process (Hiriart-Urruty et al. in Rev. Mat. Iberoam. 27(2):449–474, 2011; López and Volle in J. Conv. Anal. 17(3–4):1057–1075, 2010). Applications are provided concerning a class of relaxed minimization problems in the framework of the so-called B-regularization theory. Special attention is paid to the case when the initial problem admits optimal solutions under compactness assumptions.
11.
12.
We propose a new class of incremental primal–dual techniques for solving nonlinear programming problems with special structure. Specifically, the objective functions of the problems are sums of independent nonconvex continuously differentiable terms, minimized subject to a set of nonlinear constraints for each term. The technique performs successive primal–dual increments for each decomposition term of the objective function. The primal–dual increments are calculated by performing one Newton step towards the solution of the Karush–Kuhn–Tucker optimality conditions of each subproblem associated with each objective function term. We show that, under mild assumptions, the resulting incremental algorithm is q-linearly convergent for the original problem.
13.
14.
Quadratic Convex Reformulation (QCR) is a technique that was originally proposed for quadratic 0–1 programs and later extended to various other problems. It is used to convert non-convex instances into convex ones, in such a way that the bound obtained by solving the continuous relaxation of the reformulated instance is as strong as possible. In this paper, we focus on the case of quadratically constrained quadratic 0–1 programs. The variant of QCR previously proposed for this case involves the addition of a quadratic number of auxiliary continuous variables. We show that, in fact, at most one additional variable is needed. Some computational results are also presented.
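The convexification idea underlying QCR can be sketched in a toy uniform-shift variant (the actual QCR perturbation is chosen by solving an SDP, and the problem data below are hypothetical): since x_i² = x_i on binary variables, adding t·Σ(x_i² − x_i) leaves the objective unchanged on {0,1}ⁿ while shifting the Hessian by a multiple of the identity, and any t ≥ −λ_min(Q) makes the continuous relaxation convex.

```python
import math

# Hypothetical indefinite quadratic 0-1 objective x'Qx + c'x.
Q = [[0.0, 3.0], [3.0, 0.0]]     # eigenvalues +3 and -3, so non-convex
c = [1.0, -2.0]

# lambda_min of the 2x2 symmetric Q via trace/determinant
tr = Q[0][0] + Q[1][1]
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4 * det)) / 2
t = -lam_min                     # smallest uniform shift giving a convex form

def f(x, shift):
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    lin = sum(c[i] * x[i] for i in range(2))
    pen = shift * sum(xi * xi - xi for xi in x)   # vanishes on binary points
    return quad + lin + pen

binaries = [[0, 0], [0, 1], [1, 0], [1, 1]]
# the shifted objective agrees with the original on every binary point
assert all(abs(f(p, 0.0) - f(p, t)) < 1e-9 for p in binaries)
```

QCR proper optimizes the perturbation so that the continuous relaxation bound is as tight as possible, rather than taking the minimal uniform shift shown here.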
15.
A variational inequality system, which is a generalization of the saddle point problem, is considered. The system does not possess monotonicity properties, and the feasible set is unbounded in general. To solve the problem, we propose a completely implementable iterative scheme whose convergence is proved under certain coercivity-type conditions.
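For the monotone saddle-point special case of such a system (the paper's setting is more general and non-monotone), the extragradient predictor-corrector step is the standard implementable scheme; a minimal sketch on the hypothetical bilinear toy L(x, y) = x·y, i.e. the VI with operator F(x, y) = (y, −x):

```python
# Plain gradient descent-ascent diverges on this bilinear saddle point;
# the extragradient predictor step restores convergence to (0, 0).
step = 0.5
x, y = 1.0, 1.0
for _ in range(200):
    # predictor: a trial step at the current point
    xp, yp = x - step * y, y + step * x
    # corrector: step from the ORIGINAL point using the predictor's field
    x, y = x - step * yp, y + step * xp
# (x, y) approaches the unique saddle point (0, 0)
```

For this linear iteration one can check that the extragradient map is a contraction with factor √((1 − γ²)² + γ²) < 1 for step γ ∈ (0, 1), which is why the sketch converges.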
16.
《Journal of Computational and Applied Mathematics》2002,138(1):127-150
We study the problem of minimizing a sum of Euclidean norms. This nonsmooth optimization problem arises in many different kinds of modern scientific applications. In this paper we first transform this problem and its dual into a system of strongly semismooth equations and give some uniqueness theorems for the problem. We then present a primal–dual algorithm that solves this system of strongly semismooth equations. Preliminary numerical results are reported, showing that the primal–dual algorithm is very promising.
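To make the problem class concrete: with a single unknown point x and data points p_i, minimizing Σ_i ‖x − p_i‖ is the classical Fermat–Weber (geometric median) problem. As a simple baseline on made-up data (not the paper's semismooth-Newton method), the Weiszfeld fixed-point iteration solves this toy instance:

```python
import math

# Four hypothetical data points at the corners of a square; by symmetry
# the geometric median is the center (1, 1).
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
x, y = 0.5, 0.3                      # start away from the data points

for _ in range(2000):
    wsum = wx = wy = 0.0
    for px, py in pts:
        d = math.hypot(x - px, y - py)
        w = 1.0 / d                  # Weiszfeld weight; d > 0 off the data
        wsum += w
        wx += w * px
        wy += w * py
    x, y = wx / wsum, wy / wsum      # weighted average of the data points
```

Weiszfeld's method is only linearly convergent and can stall at data points, which is part of the motivation for Newton-type approaches such as the one in this entry.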
17.
Maria Gonzalez-Lima, Hua Wei, Henry Wolkowicz 《Computational Optimization and Applications》2009,44(2):213-247
This paper studies a primal–dual interior/exterior-point path-following approach for linear programming that is motivated by using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal–dual optimality equations. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e. it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. Assuming that a basis matrix (easily factorizable and well-conditioned) can be found, we apply a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property, and sparsity is maintained. We then apply either a direct solution method or an iterative solver (within an inexact Newton framework) to solve this equation. Since the linearization is well posed, we use affine scaling and do not maintain nonnegativity once we are close enough to the optimum, i.e. we switch to a pure Newton step technique. In addition, we correctly identify some of the primal and dual variables that converge to 0 and delete them (a purify step).
We test our method on random nondegenerate problems and on problems from the Netlib set, and we compare it with the standard normal equations (NEQ) approach. We use a heuristic to find the basis matrix. We show that our method is efficient for large, well-conditioned problems. It is slower than NEQ on ill-conditioned problems, but it yields higher-accuracy solutions.
18.
《European Journal of Operational Research》1999,116(3):629-639
This paper presents a method of sensitivity analysis on the cost coefficients and the right-hand sides for most variants of the primal–dual interior point method. We first define an ε-optimal solution to describe the characteristics of the final solution obtained by the primal–dual interior point method. Then an ε-sensitivity analysis is defined to determine the characteristic region where the final solution remains ε-optimal as a cost coefficient or a right-hand side changes. To develop the method of ε-sensitivity analysis, we first derive expressions for the final solution from data that are commonly maintained in most variants of the primal–dual interior point method. We then extract the characteristic regions on the cost coefficients and the right-hand sides by manipulating these mathematical expressions. Finally, we show that in the nondegenerate case, the characteristic regions obtained by ε-sensitivity analysis converge to those obtained by sensitivity analysis in the simplex algorithm.
19.
《European Journal of Operational Research》2002,143(2):234-256
We propose a new class of primal–dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of O(n^(4/(4+ρ)) log(n/ε)) iterations, where ρ ∈ [0, 2] is a parameter used in the system defining the search direction. If ρ = 0, our results reproduce the well-known complexity of the standard primal–dual Newton method for LO. At each iteration, our algorithm needs only to solve a linear system of equations. An extension of the algorithms to semidefinite optimization is also presented.