Similar Documents
20 similar documents found, search time 15 ms
1.
This paper demonstrates a customized application of the classical proximal point algorithm (PPA) to the convex minimization problem with linear constraints. We show that if the proximal parameter in metric form is chosen appropriately, the application of PPA can effectively exploit the simplicity of the objective function. The resulting subproblems can be easier than those of the augmented Lagrangian method (ALM), a benchmark method for the model under consideration. The efficiency of the customized application of PPA is demonstrated on some image processing problems.
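To make the point concrete, here is a minimal numerical sketch for the ℓ1 instance min ‖x‖₁ s.t. Ax = b; the particular primal-dual update rule, the parameter names r and s, and the requirement r·s > ‖A‖₂² are assumptions about one common customized-PPA scheme, not details taken from this paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def customized_ppa(A, b, r, s, iters=500):
    """Sketch of a customized PPA for min ||x||_1 s.t. Ax = b.

    The proximal metric couples x and the multiplier lam so that the
    x-subproblem is a plain prox of the objective (soft-thresholding)
    rather than an ALM subproblem coupled through A. Positive
    definiteness of the metric requires r * s > ||A||_2**2.
    """
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # x-step: argmin ||x||_1 - <lam, Ax> + (r/2)||x - x_k||^2
        x_new = soft_threshold(x + A.T @ lam / r, 1.0 / r)
        # dual step evaluated at the extrapolated point 2*x_new - x
        lam = lam - (A @ (2.0 * x_new - x) - b) / s
        x = x_new
    return x
```

For instance, choosing r = s = 1.1 * np.linalg.norm(A, 2) satisfies the metric condition, and each iteration costs only two matrix-vector products plus a shrinkage; this is the sense in which the subproblems can be easier than ALM's.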

2.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k²), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained ℓ1-ℓ2 minimization problem are presented to demonstrate the effectiveness of AALM.
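As a rough illustration of where such a rate can come from (a hedged sketch, not necessarily the authors' exact scheme), FISTA-style extrapolation can be applied to the multiplier sequence of ALM for min f(x) s.t. Ax = b:

$$
\begin{aligned}
x^{k+1} &= \arg\min_x\; f(x) + \langle \hat\lambda^k, Ax-b\rangle + \tfrac{\beta}{2}\|Ax-b\|^2,\\
\lambda^{k+1} &= \hat\lambda^k + \beta\,(Ax^{k+1}-b),\\
t_{k+1} &= \tfrac{1}{2}\big(1+\sqrt{1+4t_k^2}\big),\qquad
\hat\lambda^{k+1} = \lambda^{k+1} + \tfrac{t_k-1}{t_{k+1}}\,(\lambda^{k+1}-\lambda^k),
\end{aligned}
$$

starting from t_1 = 1 and with the extrapolated multiplier initialized at the starting multiplier. The x-subproblem is the usual ALM subproblem; only the momentum step on the multipliers is new, mirroring how Nesterov acceleration lifts O(1/k) dual schemes to O(1/k²).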

3.
We consider the augmented Lagrangian method (ALM) for constrained optimization problems in the presence of convex inequality and convex abstract constraints. We focus on the case where the Lagrangian sub-problems are solved up to approximate stationary points, with increasing accuracy. We analyze two different criteria of approximate stationarity for the sub-problems and we prove the global convergence to stationary points of ALM in both cases.  相似文献   

4.
Recently, an indefinite linearized augmented Lagrangian method (IL-ALM) was proposed for convex programming problems with linear constraints. The IL-ALM differs from the linearized augmented Lagrangian method in that the augmented Lagrangian is linearized by adding an indefinite quadratic proximal term. However, it preserves the algorithmic features of the linearized ALM and usually improves its performance. The IL-ALM has been proved convergent from a contraction perspective, but its convergence rate was still missing, mainly because the indefinite setting destroys the structures needed to apply the contraction frameworks directly. In this paper, we derive the convergence rate for this algorithm by a different analysis. We prove that a worst-case O(1/t) convergence rate still holds for this algorithm, where t is the number of iterations. Additionally, we show that the customized proximal point algorithm can employ larger step sizes by proving its equivalence to the linearized ALM.
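For orientation, a sketch of the standard linearized-ALM construction (notation assumed): the ALM subproblem for min f(x) s.t. Ax = b is replaced by

$$
x^{k+1} = \arg\min_x\; f(x) + \beta\,\big\langle A^{\top}\big(Ax^k - b - \lambda^k/\beta\big),\, x\big\rangle + \tfrac{r}{2}\,\|x - x^k\|^2,
$$

which is exactly the original subproblem plus the proximal term ½‖x − x^k‖²_D with D = rI − βAᵀA. The conventional requirement D ⪰ 0 forces r ≥ β‖AᵀA‖; the indefinite variant takes r = τβ‖AᵀA‖ with τ < 1, so D may be indefinite and the effective step size 1/r is larger (the admissible range of τ is the paper's result and is not restated here).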

5.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower-level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem in which the first-order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
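Concretely, writing V(x) := min_{y∈Y} f(x,y) for the lower-level value function, the combined problem takes the schematic form (a sketch; the stationarity condition is written for an unconstrained lower level and becomes a variational inequality when the convex set constraint Y is active):

$$
\min_{x,y}\; F(x,y)\qquad \text{s.t.}\qquad f(x,y) - V(x) \le 0,\qquad \nabla_y f(x,y) = 0,\qquad y \in Y.
$$

Since V is in general nonsmooth, and the constraint f − V ≤ 0 typically fails standard constraint qualifications, the combined problem is exactly the kind of nonsmooth, nonconvex program the smoothing augmented Lagrangian method targets.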

6.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k²), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained ℓ1-ℓ2 minimization problem are presented to demonstrate the effectiveness of AALM.

7.
Computational Optimization and Applications - The augmented Lagrangian method (ALM) provides a benchmark for solving the canonical convex optimization problem with linear constraints. The direct...

8.
We propose a new fast algorithm for solving a TV-based image restoration problem. Our approach is based on merging subspace optimization methods into an augmented Lagrangian method. The proposed algorithm can be seen as a variant of the ALM (Augmented Lagrangian Method), and its convergence properties are analyzed from a DRS (Douglas-Rachford splitting) viewpoint. Experiments on a set of image restoration benchmark problems show that the proposed algorithm is a strong contender among current state-of-the-art methods.
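For context, the standard splitting behind ALM-type methods for TV restoration introduces an auxiliary variable for the image gradient (a generic sketch; the subspace-optimization acceleration of the proposed variant is not shown):

$$
\min_{u,d}\; \|d\|_{1} + \tfrac{\mu}{2}\,\|Ku - f\|^{2} \qquad \text{s.t.}\qquad d = \nabla u,
$$

where K is the blur/measurement operator and f the observed image. The augmented Lagrangian then alternates a quadratic u-subproblem, a shrinkage d-subproblem, and a multiplier update; reading this alternation on the dual is what connects it to Douglas-Rachford splitting.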

9.
Given a set of corrupted data drawn from a union of multiple subspaces, the subspace recovery problem is to segment the data into their respective subspaces and simultaneously correct the possible noise. Recently, it has been discovered that this task can be characterized, both theoretically and numerically, by solving a convex minimization problem involving the matrix nuclear norm and the ℓ2,1 mixed norm. The minimization model has a separable structure in both the objective function and the constraint; it thus falls into the framework of the augmented Lagrangian alternating direction approach. In this paper, we propose and investigate an augmented Lagrangian algorithm. We split the augmented Lagrangian function and minimize the subproblems alternately in one variable while fixing the other. Moreover, we linearize the subproblem and add a proximal point term to derive closed-form solutions easily. Global convergence of the proposed algorithm is established under some technical conditions. Extensive experiments on simulated and real data verify that the proposed method is very effective and faster than the state-of-the-art algorithm LRR.
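The model in question is, in its usual low-rank representation (LRR) form (standard notation, assumed here rather than quoted from the paper):

$$
\min_{Z,E}\; \|Z\|_{*} + \gamma\,\|E\|_{2,1} \qquad \text{s.t.}\qquad X = XZ + E,
$$

where the columns of X are the corrupted samples, the nuclear norm promotes a low-rank representation Z whose block structure reveals the subspaces, and the ℓ2,1 norm makes the noise term E column-sparse. Both the objective and the single linear constraint separate in (Z, E), which is exactly the structure the alternating scheme exploits.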

10.
Image recovery via total variation minimization and related problems
Summary. We study a classical image denoising technique introduced by L. Rudin and S. Osher a few years ago, namely the constrained minimization of the total variation (TV) of the image. First, we give existence and uniqueness results and prove the link between the constrained minimization problem and the minimization of an associated Lagrangian functional. Then we describe a relaxation method for computing the solution and give a proof of convergence. After this, we explain why the TV-based model is well suited to the recovery of some images and not of others. We finally propose an alternative approach whose purpose is to handle the minimization of the minimum of several convex functionals. For instance, we propose a variant of the original TV minimization problem that correctly handles some situations where TV fails.
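Schematically, the constrained model and the associated Lagrangian functional whose link is proved here are

$$
\min_{u}\; \int_{\Omega} |\nabla u| \quad \text{s.t.}\quad \|u - f\|_{L^{2}(\Omega)}^{2} = \sigma^{2}
\qquad \text{and} \qquad
\min_{u}\; \int_{\Omega} |\nabla u| + \tfrac{\lambda}{2}\,\|u - f\|_{L^{2}(\Omega)}^{2},
$$

with f the observed image, σ² the noise level, and λ ≥ 0 the multiplier matching the two problems.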

11.
In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first order algorithm which relies on the inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I with the aim of solving a QSDP problem to moderate accuracy or using it to generate a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) wherein the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we are able to establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better conditioned linear systems are carefully designed to replace the original linear systems and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions. The software reviewed as part of this submission was given the DOI (Digital Object Identifier) https://doi.org/10.5281/zenodo.1206980.
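In schematic form (notation assumed), the QSDP problem class treated by QSDPNAL reads

$$
\min_{X \in \mathcal{S}^{n}}\; \tfrac{1}{2}\,\langle X, \mathcal{Q}X\rangle + \langle C, X\rangle
\qquad \text{s.t.}\qquad
\mathcal{A}_{E}X = b_{E},\quad \mathcal{A}_{I}X \ge b_{I},\quad X \in \mathcal{P},\quad X \succeq 0,
$$

with Q a self-adjoint positive semidefinite linear operator on S^n, A_E and A_I the large families of linear equality and inequality constraints, P the simple convex polyhedral set, and X ⪰ 0 the positive semidefinite cone constraint.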

12.
The alternating direction method of multipliers (ADMM) is a widely used method for solving many convex minimization models arising in signal and image processing. In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints. This algorithm is obtained by applying the inertial Douglas-Rachford splitting algorithm to the dual of the primal problem. We study the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Furthermore, we apply the proposed algorithm to the robust principal component analysis problem and compare it with other state-of-the-art algorithms. Numerical results demonstrate the advantage of the proposed algorithm.
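As a hedged sketch (the precise inertial variables follow from the inertial Douglas-Rachford construction on the dual and may differ in detail), an inertial two-block ADMM for min f(x) + g(z) s.t. Ax + Bz = c first extrapolates and then performs the usual sweep:

$$
\begin{aligned}
(\bar z^{k}, \bar\lambda^{k}) &= (z^{k}, \lambda^{k}) + \alpha_{k}\,\big((z^{k}, \lambda^{k}) - (z^{k-1}, \lambda^{k-1})\big),\\
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\beta}{2}\,\big\|Ax + B\bar z^{k} - c + \bar\lambda^{k}/\beta\big\|^{2},\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\beta}{2}\,\big\|Ax^{k+1} + Bz - c + \bar\lambda^{k}/\beta\big\|^{2},\\
\lambda^{k+1} &= \bar\lambda^{k} + \beta\,\big(Ax^{k+1} + Bz^{k+1} - c\big),
\end{aligned}
$$

with inertial parameters α_k ∈ [0, 1); setting α_k ≡ 0 recovers the classical ADMM.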

13.
A proximal-based decomposition method for convex minimization problems
This paper presents a decomposition method for solving convex minimization problems. At each iteration, the algorithm computes two proximal steps in the dual variables and one proximal step in the primal variables. We derive this algorithm from Rockafellar's proximal method of multipliers, which involves an augmented Lagrangian with an additional quadratic proximal term. The algorithm preserves the good features of the proximal method of multipliers, with the additional advantage that it leads to a decoupling of the constraints and is thus suitable for parallel implementation. We allow the proximal minimization steps to be computed approximately, and we prove that, under mild assumptions on the problem's data, the method is globally convergent at a linear rate. The method is compared with alternating direction type methods and applied to the particular case of minimizing a convex function over a finite intersection of closed convex sets.
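For min f(x) + g(z) s.t. Ax + Bz = c, the "two dual proximal steps, one primal proximal step" structure takes the following predictor-corrector form (a sketch in assumed notation):

$$
\begin{aligned}
p^{k+1} &= \lambda^{k} + \beta\,(Ax^{k} + Bz^{k} - c), && \text{(dual predictor)}\\
x^{k+1} &= \arg\min_{x}\; f(x) + \langle p^{k+1}, Ax\rangle + \tfrac{1}{2\beta}\,\|x - x^{k}\|^{2}, &&\\
z^{k+1} &= \arg\min_{z}\; g(z) + \langle p^{k+1}, Bz\rangle + \tfrac{1}{2\beta}\,\|z - z^{k}\|^{2}, && \text{(decoupled primal steps)}\\
\lambda^{k+1} &= \lambda^{k} + \beta\,(Ax^{k+1} + Bz^{k+1} - c). && \text{(dual corrector)}
\end{aligned}
$$

Because both primal minimizations see only the fixed predictor p^{k+1}, they are independent of each other; this is the decoupling of the constraints that makes the method suitable for parallel implementation.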

14.
In this paper, we propose a primal majorized semismooth Newton-CG augmented Lagrangian method for large-scale linearly constrained convex programming problems, especially for some difficult problems. The basic idea is to apply the majorized semismooth Newton-CG augmented Lagrangian method to the primal convex problem. We take two special nonlinear semidefinite programming problems as examples to illustrate the algorithm. Furthermore, we establish the global convergence and the iteration complexity of the algorithm. Numerical experiments demonstrate that our method works very well on the test problems, especially on many ill-conditioned ones.
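The "majorized" ingredient can be illustrated generically (a sketch; L denotes a Lipschitz constant of ∇f, and the actual majorant used in the paper may differ): at each inner step a smooth term f in the augmented Lagrangian subproblem is replaced by its convex quadratic majorant

$$
\hat f(x; x^{k}) \;=\; f(x^{k}) + \langle \nabla f(x^{k}),\, x - x^{k}\rangle + \tfrac{L}{2}\,\|x - x^{k}\|^{2} \;\ge\; f(x),
$$

with equality at x = x^k, so the semismooth Newton-CG solver faces a better-structured surrogate, and decreasing the surrogate also decreases the true subproblem objective.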

15.
Mathematical Programming - The augmented Lagrangian method (ALM) is extended to a broader-than-ever setting of generalized nonlinear programming in convex and nonconvex optimization that is capable...

16.
This article is concerned with the numerical modeling of unilateral contact problems in an electro-elastic material with the Tresca friction law and an electrical conductivity condition. First, we prove the existence and uniqueness of the weak solution of the model. Rather than deriving a solution method for the full coupled problem, we present and study a successive iterative (decomposition) method. The idea is to solve a displacement subproblem and an electric potential subproblem successively, in a block Gauss-Seidel fashion. The displacement subproblem leads to a constrained non-differentiable (convex) minimization problem, for which we propose an augmented Lagrangian algorithm. The electric potential unknown is computed explicitly using the Riesz representation theorem. The convergence of the iterative decomposition method is proved. Some numerical experiments are carried out to illustrate the performance of the proposed algorithm.

17.
Nonlinear rescaling and proximal-like methods in convex optimization
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem that is equivalent to the original one in the sense that their optimal solution sets coincide. A nonlinear transformation, parameterized by a positive scalar parameter and based on a smooth scaling function, is used to transform the constraints. Methods based on the NRP consist of sequential unconstrained minimization of the classical Lagrangian of the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented; in particular, a new class of exponential penalty-modified barrier function methods is introduced.
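In the usual setup (a sketch of the standard construction, with assumed notation), inequality constraints c_i(x) ≥ 0 are rescaled into the equivalent constraints

$$
t^{-1}\,\psi\big(t\,c_{i}(x)\big) \;\ge\; 0,\qquad i = 1,\dots,m,
$$

where ψ is a smooth, strictly increasing, concave scaling function with ψ(0) = 0 and ψ'(0) = 1, and t > 0 is the scalar parameter. One outer iteration minimizes the classical Lagrangian of the rescaled problem in x and then updates the multipliers explicitly via λ_i ← λ_i ψ'(t c_i(x̂)); the choice ψ(s) = 1 − e^{−s} yields the exponential penalty/modified barrier family mentioned at the end of the abstract.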

18.
Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous augmented Lagrangian algorithm for minimization with inequality constraints is the Powell-Hestenes-Rockafellar (PHR) method. The main drawback of PHR is that the objective function of the subproblems is not twice continuously differentiable, which is the main motivation for the introduction of many alternative augmented Lagrangian methods. Most of them have interesting interpretations as proximal point methods for solving the dual problem when the original nonlinear programming problem is convex. In this paper a numerical comparison between many of these methods is performed using all the suitable problems of the CUTE collection.
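For reference, for minimization of f subject to g_i(x) ≤ 0, the PHR augmented Lagrangian and its multiplier update take the standard form

$$
L_{\rho}(x,\lambda) \;=\; f(x) + \frac{1}{2\rho}\sum_{i=1}^{m}\Big(\max\{0,\ \lambda_{i} + \rho\,g_{i}(x)\}^{2} - \lambda_{i}^{2}\Big),
\qquad
\lambda_{i}^{+} = \max\{0,\ \lambda_{i} + \rho\,g_{i}(x^{+})\};
$$

the squared max term is continuously differentiable but fails to be twice so wherever λ_i + ρ g_i(x) = 0, which is precisely the lack of smoothness referred to above.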

19.
In this paper a minimization problem with a convex objective function subject to a separable convex inequality constraint "≤" and bounded variables (box constraints) is considered. We propose an iterative algorithm based on line search for solving this problem and prove its convergence. At each iteration, a separable convex programming problem with the same constraint set is solved using the Karush-Kuhn-Tucker conditions. Convex minimization problems subject to a linear equality or linear inequality "≥" constraint and bounds on the variables are also considered. Numerical illustration is included in support of the theory.
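The problem class has the form (notation assumed)

$$
\min_{x}\; f(x) \qquad \text{s.t.}\qquad \sum_{j=1}^{n} g_{j}(x_{j}) \le b,\qquad l_{j} \le x_{j} \le u_{j},\quad j = 1,\dots,n,
$$

with f convex and each g_j convex in the single variable x_j. Because the constraint is separable, the Karush-Kuhn-Tucker conditions of each separable subproblem decouple coordinate-wise once the single multiplier of the coupling inequality is fixed, which is what makes the per-iteration subproblems tractable.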

20.
Yi Zhang, Liwei Zhang, Yue Wu. TOP, 2014, 22(1): 45-79
The focus of this paper is on studying an inverse second-order cone quadratic programming problem, in which the parameters in the objective function need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with cone constraints; its dual, which has fewer variables than the original problem, is a semismoothly differentiable (SC¹) convex programming problem with both a linear inequality constraint and a linear second-order cone constraint. We demonstrate the global convergence of the augmented Lagrangian method with an exact solution to the subproblem and prove that the convergence rate of the primal iterates generated by the augmented Lagrangian method is proportional to 1/r, while the rate of the multiplier iterates is proportional to 1/√r, where r is the penalty parameter in the augmented Lagrangian. Furthermore, a semismooth Newton method with Armijo line search is constructed to solve the subproblems in the augmented Lagrangian approach. Finally, numerical results are reported to show the effectiveness of the augmented Lagrangian method with both an exact and an inexact solution to the subproblem for solving the inverse second-order cone quadratic programming problem.
