Similar Literature
20 similar records found.
1.
Mathematical Programming - In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach space. Such problems often arise in many practical...

2.
3.
We propose a new modified primal–dual proximal best approximation method for solving convex, not necessarily differentiable, optimization problems. The novelty of the method lies in introducing memory: the formulas defining the current iterate take into account iterates computed in previous steps. To this end we consider projections onto intersections of halfspaces generated from the current as well as the previous iterates. To calculate these projections we use recently obtained closed-form expressions for projectors onto polyhedral sets. The resulting algorithm with memory inherits the strong convergence properties of the original best approximation proximal primal–dual algorithm. Additionally, we compare our algorithm with the original (non-inertial) one with the help of the so-called attraction property. Extensive numerical results on image reconstruction problems illustrate the advantages of including memory in the original algorithm.
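The key computational step above is projecting onto an intersection of halfspaces. The paper relies on closed-form polyhedral projectors; the sketch below instead uses Dykstra's alternating-projection scheme, purely as an illustration of what such a projection computes. All data are made up.

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Closed-form projection of x onto the halfspace {z : <a, z> <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_intersection(x, halfspaces, n_sweeps=200):
    # Dykstra's algorithm: converges to the projection of x onto the
    # intersection of the halfspaces [(a_i, b_i), ...].
    corrections = [np.zeros_like(x, dtype=float) for _ in halfspaces]
    y = x.astype(float).copy()
    for _ in range(n_sweeps):
        for i, (a, b) in enumerate(halfspaces):
            z = proj_halfspace(y + corrections[i], a, b)
            corrections[i] = y + corrections[i] - z
            y = z
    return y

# Toy example: project a point onto the intersection of two halfspaces.
halfspaces = [(np.array([1.0, 0.0]), 1.0), (np.array([1.0, 1.0]), 1.5)]
print(proj_intersection(np.array([3.0, 2.0]), halfspaces))
```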

4.
We study the problem of minimizing a sum of Euclidean norms. This nonsmooth optimization problem arises in many different kinds of modern scientific applications. In this paper we first transform this problem and its dual problem into a system of strongly semismooth equations, and give some uniqueness theorems for this problem. We then present a primal–dual algorithm for this problem by solving this system of strongly semismooth equations. Preliminary numerical results are reported, which show that this primal–dual algorithm is very promising.
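For orientation only, here is a naive baseline for the same sum-of-Euclidean-norms objective: a smoothed surrogate minimized with SciPy's BFGS. It is not the semismooth-equation primal–dual method of the paper, and the random blocks (A_i, b_i) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sum_of_norms_smoothed(x, blocks, eps=1e-8):
    # f(x) = sum_i sqrt(||A_i x - b_i||^2 + eps^2), a smooth surrogate
    # of the nonsmooth sum of Euclidean norms.
    return sum(np.sqrt(np.sum((A @ x - b) ** 2) + eps ** 2) for A, b in blocks)

rng = np.random.default_rng(0)
blocks = [(rng.standard_normal((3, 5)), rng.standard_normal(3)) for _ in range(4)]
res = minimize(sum_of_norms_smoothed, np.zeros(5), args=(blocks,), method="BFGS")
print(res.fun, res.x)
```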

5.
6.
We propose primal–dual path-following Mehrotra-type predictor–corrector methods for solving convex quadratic semidefinite programming (QSDP) problems of the form min { (1/2) X • Q(X) + C • X : A(X) = b, X ⪰ 0 }, where Q is a self-adjoint positive semidefinite linear operator on S^n, b ∈ R^m, and A is a linear map from S^n to R^m. At each interior-point iteration, the search direction is computed from a dense symmetric indefinite linear system (called the augmented equation) of dimension m + n(n + 1)/2. Such linear systems are typically very large and can only be solved by iterative methods. We propose three classes of preconditioners for the augmented equation, and show that the corresponding preconditioned matrices have favorable asymptotic eigenvalue distributions for fast convergence under suitable nondegeneracy assumptions. Numerical experiments on a variety of QSDPs with n up to 1600 are performed, and the computational results show that our methods are efficient and robust. Research supported in part by Academic Research Grant R146-000-076-112.
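Because the augmented equation is symmetric indefinite, it is typically solved with a preconditioned Krylov method rather than CG. The sketch below is a generic illustration using SciPy's MINRES with a simple diagonal preconditioner on a small random saddle-point system; it does not implement any of the paper's three preconditioner classes, and all matrices are invented.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(1)
m, p = 40, 15                                            # block sizes
Q = rng.standard_normal((m, m)); Q = Q @ Q.T + np.eye(m)  # SPD (1,1) block
B = rng.standard_normal((p, m))                           # constraint block

# Symmetric indefinite augmented (saddle-point) matrix and right-hand side.
K = np.block([[Q, B.T], [B, np.zeros((p, p))]])
rhs = rng.standard_normal(m + p)

# Diagonal preconditioner: inverse diagonal of Q on the first block,
# identity on the second (a purely illustrative choice).
d = np.concatenate([1.0 / np.diag(Q), np.ones(p)])
M = LinearOperator((m + p, m + p), matvec=lambda v: d * v)

x, info = minres(K, rhs, M=M)
print("minres exit flag:", info, " residual:", np.linalg.norm(K @ x - rhs))
```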

7.
8.
In this paper, we study the global convergence of a large class of primal–dual interior point algorithms for solving the linearly constrained convex programming problem. The algorithms in this class decrease the value of a primal–dual potential function and hence belong to the class of so-called potential reduction algorithms. An inexact line search based on the Armijo stepsize rule is used to compute the stepsize. The directions used by the algorithms are the same as the ones used in primal–dual path-following and potential reduction algorithms, and a very mild condition on the choice of the centering parameter is assumed. The algorithms always keep primal and dual feasibility and, in contrast to the polynomial potential reduction algorithms, they do not need to drive the value of the potential function towards −∞ in order to converge. One of the techniques used in the convergence analysis of these algorithms has its roots in nonlinear unconstrained optimization theory. Research supported in part by NSF Grant DDM-9109404.
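A minimal sketch of the Armijo backtracking rule on a generic potential function φ along a descent direction d. This is only the textbook line-search mechanism; the actual primal–dual potential function and search directions are those defined in the paper, and the toy quadratic below is an assumption for illustration.

```python
import numpy as np

def armijo_step(phi, grad_phi, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    # Backtrack until phi(x + alpha d) <= phi(x) + sigma * alpha * <grad phi(x), d>.
    phi_x = phi(x)
    slope = grad_phi(x) @ d          # negative for a descent direction
    alpha = alpha0
    while phi(x + alpha * d) > phi_x + sigma * alpha * slope:
        alpha *= beta
    return alpha

# Toy example: quadratic potential, steepest-descent direction.
phi = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([2.0, -1.0])
print("Armijo stepsize:", armijo_step(phi, grad, x, -grad(x)))
```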

9.
We give a necessary condition for the existence of a feasible solution to the transportation problem through a set of admissible cells, and an algorithm to find a set of admissible cells that satisfies the necessary condition. Either there exists a feasible solution through the admissible cells (which is therefore optimal, since the complementary slackness conditions hold), or we can begin using the primal–dual algorithm (PDA) at this point. Our approach has two important advantages: our O(mn) procedure for updating the dual variables takes much less computing time than any procedure for solving a maximum flow problem in the primal phase of the PDA, and we are never concerned with the degeneracy problem, as we are not seeking basic solutions but admissible cells. An example is presented to illustrate our approach. We finally provide computational results for a set of 30 randomly generated instances. Comparison of our method with the PDA reveals a real speed-up.
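As a rough illustration of the admissible-cell idea (not the paper's specific construction or its O(mn) dual update), the sketch below starts from the classical dual-feasible initialization u_i = min_j c_ij, v_j = min_i (c_ij − u_i) and lists the cells whose reduced cost is zero; only flows on those cells can satisfy complementary slackness.

```python
import numpy as np

def admissible_cells(cost):
    # Dual-feasible start: u_i + v_j <= c_ij for every cell (i, j).
    u = cost.min(axis=1)                      # row duals
    v = (cost - u[:, None]).min(axis=0)       # column duals
    reduced = cost - u[:, None] - v[None, :]  # nonnegative by construction
    cells = [tuple(c) for c in np.argwhere(np.isclose(reduced, 0.0))]
    return u, v, cells

cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 5.0, 9.0],
                 [6.0, 7.0, 7.0]])
u, v, cells = admissible_cells(cost)
print("u =", u, " v =", v, " admissible cells:", cells)
```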

10.
11.
12.
A primal-dual path-following algorithm that applies directly to a linear program of the form min{c^T x : Ax = b, Hx ≤ u, x ≥ 0, x ∈ R^n} is presented. This algorithm explicitly handles upper bounds, generalized upper bounds, variable upper bounds, and block diagonal structure. We also show how the structure of time-staged problems and network flow problems can be exploited, especially on a parallel computer. Finally, using our algorithm, we obtain a complexity bound of O(ds^2 log(nk)) for transportation problems with s origins, d destinations (s < d), and n arcs, where k is the maximum absolute value of the input data. This research was supported in part by NSF Grants DMS-85-12277 and CDR-84-21402 and by ONR Contract N00014-87-K-0214.
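For a tiny instance of the stated LP form, an off-the-shelf solver serves as a reference point. The sketch below uses SciPy's HiGHS backend, not the structured path-following algorithm of the paper, and the problem data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  s.t.  Ax = b,  Hx <= u,  x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]]);  b = np.array([2.0])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]]);  u = np.array([1.5, 1.8])

res = linprog(c, A_ub=H, b_ub=u, A_eq=A, b_eq=b,
              bounds=[(0, None)] * 3, method="highs")
print(res.status, res.x, res.fun)
```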

13.
This paper is concerned with a primal–dual interior point method for solving nonlinear semidefinite programming problems. The method consists of the outer iteration (SDPIP) that finds a KKT point and the inner iteration (SDPLS) that calculates an approximate barrier KKT point. Algorithm SDPLS uses a commutative class of Newton-like directions for the generation of line search directions. By combining the primal barrier penalty function and the primal–dual barrier function, a new primal–dual merit function is proposed. We prove the global convergence property of our method. Finally some numerical experiments are given.

14.
15.
Interior-point methods in augmented form for linear and convex quadratic programming require the solution of a sequence of symmetric indefinite linear systems which are used to derive search directions. Safeguards are typically required in order to handle free variables or rank-deficient Jacobians. We propose a consistent framework and accompanying theoretical justification for regularizing these linear systems. Our approach can be interpreted as a simultaneous proximal-point regularization of the primal and dual problems. The regularization is termed exact to emphasize that, although the problems are regularized, the algorithm recovers a solution of the original problem for appropriate values of the regularization parameters.
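A minimal sketch of the idea, assuming the usual primal-dual proximal regularization that adds ρI and −δI to the diagonal blocks of the augmented system; the matrices below are invented (a singular Hessian block and a rank-deficient Jacobian) and this is not the paper's full algorithm, only a demonstration that the regularized system becomes nonsingular.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
H = np.zeros((n, n))                 # singular Hessian block (e.g. a pure LP)
A = rng.standard_normal((m, n))
A[2] = A[0] + A[1]                   # deliberately rank-deficient Jacobian

rho, delta = 1e-4, 1e-4              # primal / dual regularization parameters

K_plain = np.block([[H, A.T], [A, np.zeros((m, m))]])
K_reg = np.block([[H + rho * np.eye(n), A.T],
                  [A, -delta * np.eye(m)]])

print("unregularized rank deficiency:",
      K_plain.shape[0] - np.linalg.matrix_rank(K_plain))
print("regularized system solvable:",
      np.all(np.isfinite(np.linalg.solve(K_reg, np.ones(n + m)))))
```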

16.
This paper studies a primal–dual interior/exterior-point path-following approach for linear programming that is motivated by using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal–dual optimality equations. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e. it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. Assuming that a basis matrix (easily factorizable and well-conditioned) can be found, we apply a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property, and sparsity is maintained. We then apply either a direct solution method or an iterative solver (within an inexact Newton framework) to solve this equation. Since the linearization is well-posed, we use affine scaling and do not maintain nonnegativity once we are close enough to the optimum, i.e. we switch to a pure Newton step technique. In addition, we correctly identify some of the primal and dual variables that converge to 0 and delete them (a purify step). We test our method on random nondegenerate problems and on problems from the Netlib set, and we compare it with the standard normal equations (NEQ) approach. We use a heuristic to find the basis matrix. We show that our method is efficient for large, well-conditioned problems; it is slower than NEQ on ill-conditioned problems, but it yields higher-accuracy solutions.

17.
In elliptic cone optimization problems, we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of the so-called elliptic cones. We present some general classes of optimization problems that can be cast as elliptic cone programmes such as second-order cone programmes and circular cone programmes. We also describe some real-world applications of this class of optimization problems. We study and analyse the Jordan algebraic structure of the elliptic cones. Then, we present a glimpse of the duality theory associated with elliptic cone optimization. A primal–dual path-following interior-point algorithm is derived for elliptic cone optimization problems. We prove the polynomial convergence of the proposed algorithms by showing that the logarithmic barrier is a strongly self-concordant barrier. The numerical examples show the path-following algorithms are efficient.
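Second-order cone programmes are the best-known special case mentioned above. As a small self-contained aside, the Euclidean projection onto the second-order cone {(x, t) : ||x|| ≤ t} has the standard closed form below; this is textbook material, not code from the paper.

```python
import numpy as np

def proj_second_order_cone(x, t):
    # Euclidean projection of (x, t) onto {(z, s) : ||z||_2 <= s}.
    nx = np.linalg.norm(x)
    if nx <= t:                      # already inside the cone
        return x, t
    if nx <= -t:                     # inside the polar cone -> project to the origin
        return np.zeros_like(x), 0.0
    alpha = 0.5 * (nx + t)           # otherwise project onto the boundary
    return alpha * x / nx, alpha

print(proj_second_order_cone(np.array([3.0, 4.0]), 1.0))   # scaled onto the boundary
```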

18.
Ma, Feng; Bi, Yiming; Gao, Bin. Numerical Algorithms, 2019, 82(2): 641–662.
Numerical Algorithms - The primal–dual hybrid gradient (PDHG) method has been widely used for solving saddle point problems arising in image processing. In particular, PDHG can be used to...
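For reference, a bare-bones PDHG (Chambolle–Pock) iteration for the model problem min_x (1/2)||x − b||² + λ||Dx||₁ with D a 1-D finite-difference operator and step sizes satisfying τσ||D||² < 1. This is a generic textbook instance, not the particular scheme studied in the paper, and the noisy signal b is synthetic.

```python
import numpy as np

def pdhg_tv1d(b, lam=1.0, n_iter=500):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)           # forward differences, shape (n-1, n)
    L = np.linalg.norm(D, 2)                  # operator norm of D
    tau = sigma = 0.99 / L                    # tau * sigma * L^2 < 1
    x = b.copy(); x_bar = x.copy(); y = np.zeros(n - 1)
    for _ in range(n_iter):
        # Dual step: prox of (lam*||.||_1)^* is the clip to [-lam, lam].
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step: prox of 0.5*||. - b||^2.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x               # extrapolation (theta = 1)
        x = x_new
    return x

b = np.concatenate([np.zeros(20), np.ones(20)])
b += 0.1 * np.random.default_rng(3).standard_normal(40)
print(pdhg_tv1d(b, lam=0.5)[:5])
```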

19.
M. Volle. TOP, 2012, 20(2): 534–546.
We give some properties and uses of a primal–dual operation on sets that appears in the closed convex relaxation process (Hiriart-Urruty et al. in Rev. Mat. Iberoam. 27(2):449–474, 2011; López and Volle in J. Conv. Anal. 17(3–4):1057–1075, 2010). Applications are provided concerning a class of relaxed minimization problems in the framework of the so-called B-regularization theory. Special attention is paid to the case when the initial problem admits optimal solutions under compactness assumptions.

20.
We propose a new class of primal–dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of O(n^{4/(4+ρ)} log(n/ε)) iterations, where ρ ∈ [0, 2] is a parameter used in the system defining the search direction. If ρ = 0, our results reproduce the well-known complexity of the standard primal–dual Newton method for LO. At each iteration, our algorithm needs only to solve a linear equation system. An extension of the algorithms to semidefinite optimization is also presented.
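A quick arithmetic check of the stated bound: for ρ = 0 the exponent 4/(4+ρ) equals 1, recovering the familiar O(n log(n/ε)) large-update bound, while larger ρ improves the exponent. The throwaway evaluation below prints the formula's value only; these are not measured iteration counts, and the chosen n and ε are arbitrary.

```python
import math

def iteration_bound(n, eps, rho):
    # Value of n^(4/(4+rho)) * log(n/eps) from the stated complexity result.
    return n ** (4.0 / (4.0 + rho)) * math.log(n / eps)

for rho in (0.0, 1.0, 2.0):
    print(f"rho={rho:.0f}: bound ~ {iteration_bound(10_000, 1e-6, rho):.3e}")
```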

