Similar Literature
20 similar records found.
1.
Recovering a low-rank and a sparse matrix from a given matrix arises in many applications, such as image processing and video background subtraction. The 3-block alternating direction method of multipliers (ADMM) has been applied successfully to solve convex problems with 3-block variables. However, the existing sufficient conditions that guarantee the convergence of the 3-block ADMM usually require the penalty parameter $\gamma$ to satisfy a certain bound, which may affect its performance on large-scale problems in practice. In this paper, we propose the 3-block ADMM to recover a low-rank and a sparse matrix from noisy observations. We prove that the 3-block ADMM is convergent when the penalty parameters satisfy a certain condition, and that the objective-value sequence it generates converges to the optimal value. Numerical experiments verify that the proposed method achieves higher performance than existing methods in terms of both efficiency and accuracy.
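As an editorial illustration of the model described above, here is a minimal NumPy sketch of one standard 3-block ADMM splitting for noisy low-rank plus sparse recovery, \(\min \|L\|_* + \lambda\|S\|_1 + \frac{\mu}{2}\|Z\|_F^2\) subject to \(L+S+Z=M\); the function names, default parameters, and fixed iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm3(M, lam=None, mu=1.0, gamma=1.0, iters=200):
    """A generic 3-block ADMM sketch (not the paper's exact scheme) for
    min ||L||_* + lam*||S||_1 + (mu/2)*||Z||_F^2  s.t.  L + S + Z = M,
    where Z absorbs the dense observation noise."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M.shape))
    L, S, Z, Y = (np.zeros_like(M) for _ in range(4))
    for _ in range(iters):
        L = svt(M - S - Z - Y / gamma, 1.0 / gamma)           # block 1: low-rank part
        S = soft(M - L - Z - Y / gamma, lam / gamma)          # block 2: sparse part
        Z = (gamma / (mu + gamma)) * (M - L - S - Y / gamma)  # block 3: closed-form noise part
        Y = Y + gamma * (L + S + Z - M)                       # multiplier update
    return L, S, Z
```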

2.
The alternating direction method of multipliers (ADMM) is a benchmark for solving convex programming problems with separable objective functions and linear constraints. In the literature it has been illustrated as an application of the proximal point algorithm (PPA) to the dual problem of the model under consideration. This paper shows that ADMM can also be regarded as an application of PPA to the primal model with a customized choice of the proximal parameter. This primal illustration of ADMM is thus complementary to its dual illustration in the literature. Revisiting ADMM through PPA from the primal perspective also enables us to easily recover the generalized ADMM proposed by Eckstein and Bertsekas. A worst-case O(1/t) convergence rate in the ergodic sense is established for a slight extension of Eckstein and Bertsekas's generalized ADMM.
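For reference, the Eckstein–Bertsekas generalized ADMM mentioned above is commonly written in the following relaxed form for \(\min f(x)+g(z)\) subject to \(Ax+Bz=c\), with penalty \(\gamma>0\), scaled multiplier \(u\), and relaxation factor \(\alpha\in(0,2)\) (this is the common textbook presentation with a scaled dual variable; the paper's notation may differ):
\[
\begin{aligned}
x^{k+1} &= \arg\min_x\ f(x) + \tfrac{\gamma}{2}\,\|Ax + Bz^k - c + u^k\|^2,\\
h^{k+1} &= \alpha\,Ax^{k+1} - (1-\alpha)\,(Bz^k - c),\\
z^{k+1} &= \arg\min_z\ g(z) + \tfrac{\gamma}{2}\,\|h^{k+1} + Bz - c + u^k\|^2,\\
u^{k+1} &= u^k + h^{k+1} + Bz^{k+1} - c.
\end{aligned}
\]
Setting \(\alpha = 1\) recovers the classical ADMM.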

3.
In this paper, we propose an inexact multi-block ADMM-type first-order method for solving a class of high-dimensional convex composite conic optimization problems to moderate accuracy. The design of this method combines an inexact 2-block majorized semi-proximal ADMM with recent advances in the inexact symmetric Gauss–Seidel (sGS) technique for solving a multi-block convex composite quadratic programming problem whose objective contains a nonsmooth term involving only the first block variable. One distinctive feature of our proposed method (the sGS-imsPADMM) is that it only needs one cycle of an inexact sGS method, instead of an unknown number of cycles, to solve each of the subproblems involved. With some simple and implementable error tolerance criteria, the cost of solving the subproblems can be greatly reduced, and many steps in the forward sweep of each sGS cycle can often be skipped, which further contributes to the efficiency of the proposed method. Global convergence as well as iteration complexity in the non-ergodic sense is established. Preliminary numerical experiments on some high-dimensional linear and convex quadratic SDP problems with a large number of linear equality and inequality constraints are also provided. The results show that for the vast majority of the tested problems, the sGS-imsPADMM is 2–3 times faster than the directly extended multi-block ADMM with the aggressive step-length of 1.618, which is currently the benchmark among first-order methods for solving multi-block linear and quadratic SDP problems even though its convergence is not guaranteed.
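As a schematic of the sGS idea referenced above (our reading of the sGS literature; the paper's inexact criteria and semi-proximal terms are omitted), one sGS cycle for a convex quadratic coupling \(q(x_1,\dots,x_s)\) consists of a backward sweep followed by a forward sweep:
\[
\begin{aligned}
&\text{backward sweep:}\quad \hat{x}_i = \arg\min_{x_i}\ q(x_1^k,\dots,x_{i-1}^k,\,x_i,\,\hat{x}_{i+1},\dots,\hat{x}_s), \qquad i = s,\,s-1,\dots,2,\\
&\text{forward sweep:}\quad x_i^{k+1} = \arg\min_{x_i}\ q(x_1^{k+1},\dots,x_{i-1}^{k+1},\,x_i,\,\hat{x}_{i+1},\dots,\hat{x}_s), \qquad i = 1,\,2,\dots,s.
\end{aligned}
\]
The "skippable forward steps" noted in the abstract refer to forward-sweep subproblems whose solutions are already accurate enough under the error tolerance criteria.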

4.
In this article, we present a fast and stable algorithm for solving a class of optimization problems that arise in many statistical estimation procedures, such as sparse fused lasso over a graph, convex clustering, and trend filtering, among others. We propose a so-called augmented alternating direction method of multipliers (ADMM) algorithm to solve this class of problems. Compared to a standard ADMM algorithm, our proposal significantly reduces the computational cost at each iteration while maintaining roughly the same overall convergence speed. We also consider a new varying penalty scheme for the ADMM algorithm, which can further accelerate convergence, especially when solving a sequence of problems with tuning parameters of different scales. Extensive numerical experiments on the sparse fused lasso problem show that the proposed algorithm is more efficient than the standard ADMM and two other existing state-of-the-art specialized algorithms. Finally, we discuss a possible extension and some interesting connections to two well-known algorithms. Supplementary materials for the article are available online.
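For concreteness, here is a minimal sketch of the standard ADMM baseline on the 1-D fused-lasso signal approximator \(\min_\beta \frac12\|y-\beta\|^2 + \lambda\|D\beta\|_1\), with \(D\) the first-difference operator; this illustrates the splitting the article starts from, not the proposed augmented ADMM, and all names and defaults are illustrative assumptions.

```python
import numpy as np

def soft(v, tau):
    # Entrywise soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fused_lasso_admm(y, lam=1.0, gamma=1.0, iters=500):
    """Standard ADMM sketch for min 0.5*||y - b||^2 + lam*||D b||_1,
    using the splitting z = D b (D = first differences)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n first-difference matrix
    A = np.eye(n) + gamma * (D.T @ D)    # b-update system; factor once in practice
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                  # scaled dual variable
    for _ in range(iters):
        b = np.linalg.solve(A, y + gamma * D.T @ (z - u))  # b-update (linear solve)
        z = soft(D @ b + u, lam / gamma)                   # z-update (prox of l1)
        u = u + D @ b - z                                  # dual update
    return b
```

The per-iteration bottleneck here is the linear solve in the b-update; the article's augmented ADMM targets exactly this per-iteration cost.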

5.
The alternating direction method of multipliers (ADMM) is a widely used method for solving many convex minimization models arising in signal and image processing. In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints. This algorithm is obtained by applying the inertial Douglas–Rachford splitting algorithm to the dual of the primal problem. We analyze the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Furthermore, we apply the proposed algorithm to the robust principal component analysis problem and compare it with other state-of-the-art algorithms. Numerical results demonstrate the advantage of the proposed algorithm.
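Schematically, an inertial splitting method of this kind extrapolates before each step (a generic form shown for orientation only; the paper's exact algorithm and parameter conditions are given there):
\[
\bar{w}^k = w^k + \alpha_k\,(w^k - w^{k-1}), \qquad w^{k+1} = T(\bar{w}^k),
\]
where \(T\) denotes one ADMM step (equivalently, one dual Douglas–Rachford step) and \(\alpha_k \ge 0\) is the inertial parameter controlling the momentum from the previous iterate.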

6.
In this paper, we study an inexact version of the alternating direction method of multipliers (ADMM) for solving two-block separable linearly constrained convex optimization problems. Specifically, the two subproblems in the classic ADMM are allowed to be solved inexactly under certain relative error criteria, in the sense that only two parameters are needed to control the inexactness. The related convergence analysis is established under the assumption that the solution set of the KKT system of the problem is nonempty. Numerical results on solving a class of sparse signal recovery problems are also provided to demonstrate the efficiency of the proposed algorithm.
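A common shape for relative error criteria of this kind (shown only as an illustration; the paper's two-parameter criteria may differ) is to accept an inexact subproblem solution \(x^{k+1}\) with optimality residual \(r^k\) whenever
\[
\|r^k\| \le \sigma\,\|x^{k+1} - x^k\|, \qquad \sigma \in [0, 1),
\]
so the allowed inexactness shrinks automatically as the iterates settle, with no summable error sequence to tune.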

7.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analyses of the ADMM typically assume that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate-of-convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds and the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping plus a nonsmooth \(\ell_1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
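For reference, global R-linear convergence here means that the distance from the iterates to the optimal solution set \(X^*\) is dominated by a geometric sequence:
\[
\operatorname{dist}(x^k, X^*) \le C\,\rho^k \quad \text{for some } C > 0,\ \rho \in (0,1).
\]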

8.
The alternating direction method of multipliers (ADMM) has been proved to be effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming problems. The GS-ADMM partitions the variables into two groups, one consisting of p blocks and the other of q blocks, where \(p \ge 1\) and \(q \ge 1\) are two integers. The two groups are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method very attractive in big-data settings. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes that guarantees the GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. It turns out that our convergence domain for the stepsizes is significantly larger than other convergence domains in the literature. Hence, the GS-ADMM is more flexible and attractive in choosing and using larger stepsizes for the dual variable. Besides, two special cases of the GS-ADMM, which allow zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on solving a sparse matrix minimization problem in statistical learning show that our proposed method is effective and promising.
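A schematic of the update order just described, in our own notation (the paper's proximal terms, sign conventions, and admissible stepsize domain are omitted): for \(\min \sum_{i=1}^p f_i(x_i) + \sum_{j=1}^q g_j(y_j)\) subject to \(\sum_i A_i x_i + \sum_j B_j y_j = c\) with augmented Lagrangian \(\mathcal{L}_\gamma\) and dual stepsizes \(\tau, \sigma\),
\[
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\ \mathcal{L}_\gamma(x_1^k,\dots,x_i,\dots,x_p^k,\,y^k,\,\lambda^k) + \text{(proximal term)}, \quad i=1,\dots,p \text{ in parallel},\\
\lambda^{k+\frac12} &= \lambda^k - \tau\gamma\,\big(\textstyle\sum_i A_i x_i^{k+1} + \sum_j B_j y_j^k - c\big),\\
y_j^{k+1} &= \arg\min_{y_j}\ \mathcal{L}_\gamma(x^{k+1},\,y_1^k,\dots,y_j,\dots,y_q^k,\,\lambda^{k+\frac12}) + \text{(proximal term)}, \quad j=1,\dots,q \text{ in parallel},\\
\lambda^{k+1} &= \lambda^{k+\frac12} - \sigma\gamma\,\big(\textstyle\sum_i A_i x_i^{k+1} + \sum_j B_j y_j^{k+1} - c\big).
\end{aligned}
\]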

9.
Optimization, 2012, 61(6): 627–639
In this article, we consider the concave quadratic programming problem, which is known to be NP-hard. Based on the improved global optimality conditions of [Dür, M., Horst, R. and Locatelli, M., 1998, Necessary and sufficient global optimality conditions for convex maximization revisited, Journal of Mathematical Analysis and Applications, 217, 637–649] and [Hiriart-Urruty, J.B. and Ledyaev, Y.S., 1996, A note on the characterization of the global maxima of a convex function over a convex set, Journal of Convex Analysis, 3, 55–61], we develop a new approach for solving concave quadratic programming problems. The main idea of the algorithms is to generate a sequence of local minimizers ending either at a global optimal solution or at an approximate global optimal solution within a finite number of iterations. At each iteration of the algorithms we solve a number of linear programming problems with the same constraints as the original problem. We also present the convergence properties of the proposed algorithms under some conditions. The efficiency of the algorithms is demonstrated on some numerical examples.

10.
This paper presents a decomposition algorithm for solving convex programming problems with separable structure. The algorithm is obtained through application of the alternating direction method of multipliers to the dual of the convex programming problem to be solved. In particular, the algorithm reduces to the ordinary method of multipliers when the problem is regarded as nonseparable. Under the assumption that both primal and dual problems have at least one solution and the solution set of the primal problem is bounded, global convergence of the algorithm is established.

11.
This note serves two purposes. Firstly, we construct a counterexample to show that the statement on the convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex optimization problems in a highly influential paper by Boyd et al. (Found Trends Mach Learn 3(1):1–122, 2011) can be false if no prior condition on the existence of solutions to all the subproblems involved is assumed to hold. Secondly, we present fairly mild conditions to guarantee the existence of solutions to all the subproblems of the ADMM and provide a rigorous convergence analysis of the ADMM with a computationally more attractive large step-length that can even exceed the practically much preferred golden ratio of \((1+\sqrt{5})/2\).

12.
In this paper we explore the relations between the standard dual problem of a convex generalized fractional programming problem and the partial dual problem proposed by Barros et al. (1994). Taking into account the similarities between these dual problems and using basic duality results, we propose a new algorithm to directly solve the standard dual of a convex generalized fractional programming problem, and hence the original primal problem, if strong duality holds. This new algorithm works in a similar way as the algorithm proposed by Barros et al. (1994) to solve the partial dual problem. Although the convergence rates of both algorithms are similar, the new algorithm requires slightly more restrictive assumptions to ensure a superlinear convergence rate. An important characteristic of the new algorithm is that it extends to the nonlinear case the Dinkelbach-type algorithm of Crouzeix et al. (1985) applied to the standard dual problem of a generalized linear fractional program. Moreover, the general duality results derived for the nonlinear case yield an alternative way to derive the standard dual of a generalized linear fractional program. The numerical results, in the case of quadratic-linear ratios and linear constraints, show that solving the standard dual via the new algorithm is in most cases more efficient than applying the Dinkelbach-type algorithm directly to the original generalized fractional programming problem. However, the numerical results also indicate that solving the alternative dual (Barros et al., 1994) is in general more efficient than solving the standard dual. This research was carried out at the Econometric Institute, Erasmus University Rotterdam, the Netherlands, and was supported by the Tinbergen Institute Rotterdam.

13.
In this paper, we propose a parallel decomposition algorithm for solving a class of convex optimization problems, which is broad enough to contain ordinary convex programming problems with a strongly convex objective function. The algorithm is a variant of the trust region method applied to the Fenchel dual of the given problem. We prove global convergence of the algorithm and report some computational experience with the proposed algorithm on the Connection Machine Model CM-5.

14.
During the last decade, the state-of-the-art alternating direction method of multipliers (ADMM) has successfully been used to solve many two-block separable convex minimization problems arising in applied areas such as signal/image processing and statistical and machine learning. However, it remains an interesting problem how to implement ADMM for three-block separable convex minimization problems, since many objective functions in the above-mentioned areas are more conveniently decomposed into the sum of three convex functions, and since the straightforward extension of ADMM from the two-block case to the three-block case is not necessarily convergent. In this paper, we introduce a new algorithm, called the partially isochronous splitting algorithm (PISA), to implement ADMM for the three-block separable model. The main idea of our algorithm is to incorporate only one proximal term into the last subproblem of the extended ADMM so that the resulting algorithm maximally inherits the promising properties of ADMM. A remarkable advantage over the extended ADMM is that we can solve two of the subproblems simultaneously, thereby exploiting the separable structure and parallel architectures. Theoretically, we establish the global convergence of our algorithm under standard conditions, as well as its O(1/t) rate of convergence in both the ergodic and nonergodic senses, where t is the iteration counter. The computational competitiveness of our algorithm is shown by numerical experiments on an application to the well-tested robust principal component analysis model.
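One plausible reading of the update order described above (a schematic only; the precise scheme, proximal weight \(\mu\), and conditions are those of the paper): with augmented Lagrangian \(\mathcal{L}_\gamma\) for \(\min f_1(x_1)+f_2(x_2)+f_3(x_3)\) subject to \(A_1x_1+A_2x_2+A_3x_3=b\),
\[
\begin{aligned}
x_1^{k+1} &= \arg\min_{x_1}\ \mathcal{L}_\gamma(x_1,\,x_2^k,\,x_3^k,\,\lambda^k),\\
x_2^{k+1} &= \arg\min_{x_2}\ \mathcal{L}_\gamma(x_1^{k+1},\,x_2,\,x_3^k,\,\lambda^k) \quad\text{(solved in parallel with the } x_3\text{-step)},\\
x_3^{k+1} &= \arg\min_{x_3}\ \mathcal{L}_\gamma(x_1^{k+1},\,x_2^k,\,x_3,\,\lambda^k) + \tfrac{\mu}{2}\,\|x_3 - x_3^k\|^2,\\
\lambda^{k+1} &= \lambda^k - \gamma\,\big(A_1x_1^{k+1} + A_2x_2^{k+1} + A_3x_3^{k+1} - b\big).
\end{aligned}
\]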

15.
For certain types of mathematical programming problems, a related dual problem can be constructed in which the optimal objective value of the dual problem equals that of the given problem. If these two problems do not have equal values, a duality gap is said to exist. No such gap exists for pairs of ordinary dual linear programming problems, but this is not the case for linear programming problems in which the nonnegativity condition \(x \ge 0\) is replaced by the condition that \(x\) lies in a certain convex set \(K\). Duffin (Ref. 1) has shown that, when \(K\) is a cone and a certain interiority condition is fulfilled, there will be no duality gap. In this note, we show that no duality gap exists when the interiority condition is satisfied and \(K\) is an arbitrary closed convex set in \({\mathbb {R}}^n\).

16.
In this paper, we introduce a new dual program, which is representable as a semidefinite linear programming problem, for a primal convex minimax programming problem, and we show that there is no duality gap between the primal and the dual whenever the functions involved are sum-of-squares convex polynomials. Under a suitable constraint qualification, we derive strong duality results for this class of minimax problems. Consequently, we present applications of our results to robust sum-of-squares convex programming problems under data uncertainty and to minimax fractional programming problems with sum-of-squares convex polynomials. We obtain these results by first establishing sum-of-squares polynomial representations of the non-negativity of a convex max function over a system of sum-of-squares convex constraints. The new class of sum-of-squares convex polynomials is an important subclass of convex polynomials that includes convex quadratic functions and separable convex polynomials. The sum-of-squares convexity of polynomials can be checked numerically by solving semidefinite programming problems, whereas numerically verifying convexity of polynomials is generally very hard.

17.
The aim of this paper is to develop an efficient algorithm for solving a class of unconstrained nondifferentiable convex optimization problems in finite-dimensional spaces. To this end, we first formulate its Fenchel dual problem and regularize it in two steps into a differentiable strongly convex problem with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method in order to accelerate the resulting convergence scheme. The theoretical results are finally applied to an \(l_1\) regularization problem arising in image processing.
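A minimal sketch of a fast gradient step of the kind used at the final stage, for a differentiable convex function with \(L\)-Lipschitz gradient, which is exactly the setting the double regularization is engineered to produce; the FISTA-style parameter schedule shown here is one standard choice, not necessarily the paper's.

```python
import numpy as np

def fast_gradient(grad, x0, L, iters=100):
    """Nesterov/FISTA-style accelerated gradient sketch for minimizing a
    differentiable convex f with L-Lipschitz gradient (grad = its gradient)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                          # gradient step at extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize 0.5*||Qx - b||^2 (L = largest eigenvalue of Q^T Q).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(Q.T @ Q).max()
x_star = fast_gradient(lambda x: Q.T @ (Q @ x - b), np.zeros(2), L)
```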

18.
In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct necessary and sufficient conditions for both the primal and dual problems in terms of forward and backward stochastic differential equations (FBSDEs) plus some additional conditions. This formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion, and vice versa. We also find that the optimal wealth process coincides with the adjoint process of the dual problem, and vice versa. Finally, we solve three constrained utility maximization problems, which contrast the simplicity of the duality approach we propose with the technical complexity of solving the primal problem directly.

19.
Sparse linear programming is widely used in fields such as financial computation, industrial production, and assembly scheduling. This paper first presents a general model of the sparse linear programming problem and proves that the problem is NP-hard; it then applies the alternating direction method of multipliers (ADMM) to solve the problem; finally, the convergence of the algorithm on the approximate problem is proved. Numerical experiments show that the algorithm outperforms an existing hybrid genetic algorithm on large-scale numerical examples; computations on a financial example also verify the effectiveness of the algorithm and the model for sparse portfolio selection problems.
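As an illustration of the approach just summarized, here is a minimal NumPy sketch of a 2-block ADMM for one natural convex surrogate, \(\min c^\top x + \lambda\|x\|_1\) subject to \(Ax=b\) (the \(\ell_0\) sparsity term relaxed to \(\ell_1\)); the splitting, names, and defaults are our illustrative assumptions, not the paper's model, and \(A\) is assumed to have full row rank.

```python
import numpy as np

def soft(v, tau):
    # Entrywise soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_lp_admm(c, A, b, lam=1.0, gamma=1.0, iters=500):
    """ADMM sketch for min c^T x + lam*||x||_1 s.t. Ax = b, splitting x = z:
    the x-update is a closed-form projection-type step onto {x : Ax = b}."""
    n = len(c)
    AAt = A @ A.T                                    # factor once in practice
    z = np.zeros(n)
    u = np.zeros(n)                                  # scaled dual variable
    for _ in range(iters):
        v = (z - u) - c / gamma                      # unconstrained target point
        x = v - A.T @ np.linalg.solve(AAt, A @ v - b)  # enforce Ax = b (KKT solve)
        z = soft(x + u, lam / gamma)                 # z-update (prox of l1)
        u = u + x - z                                # dual update
    return z
```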

20.
The augmented Lagrangian method is an effective method for solving nonlinear programming problems. This paper proves, from a new perspective, the convergence of the augmented Lagrangian method for nonlinear nonsmooth convex optimization problems with inequality constraints. The convergence theorem for the constant-stepsize gradient method is used to prove the convergence of the constant-stepsize gradient method applied to the dual problem based on the augmented Lagrangian function, from which the global convergence of the multiplier iteration of the augmented Lagrangian method is obtained.
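For reference, the standard augmented Lagrangian for inequality constraints \(g_i(x) \le 0\), \(i=1,\dots,m\), and its multiplier iteration (a textbook form, included as context; the paper's exact formulation may differ):
\[
L_c(x, \lambda) = f(x) + \frac{1}{2c}\sum_{i=1}^{m}\Big(\max\{0,\ \lambda_i + c\,g_i(x)\}^2 - \lambda_i^2\Big),
\qquad
\lambda_i^{k+1} = \max\{0,\ \lambda_i^k + c\,g_i(x^{k+1})\},
\]
where \(x^{k+1}\) minimizes \(L_c(\cdot, \lambda^k)\). The multiplier step is precisely a constant-stepsize (stepsize \(c\)) gradient ascent step on the dual function, which is the connection between the two convergence results exploited in the paper.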
