Similar Documents
A total of 10 similar documents were found.
1.
The alternating direction method of multipliers (ADMM) has been proved effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming. GS-ADMM partitions the variables into two groups, one consisting of p block variables and the other of q block variables, where \(p \ge 1\) and \(q \ge 1\) are integers. The two groups are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method attractive in big-data settings. By adding proper proximal terms to the subproblems, we specify a domain for the stepsizes that guarantees GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. This stepsize domain turns out to be significantly larger than the convergence domains in the literature, so GS-ADMM is more flexible and allows larger stepsizes for the dual variable. In addition, two special cases of GS-ADMM, which allow using zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on a sparse matrix minimization problem arising in statistical learning show that the proposed method is effective and promising.
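For readers unfamiliar with the grouped update order described above, the following Python sketch illustrates the structure on a toy quadratic problem: two groups of blocks updated in a Gauss–Seidel fashion between groups and a Jacobi fashion within each group, with the multiplier updated twice using stepsizes tau and sigma. The toy objective, the proximal weight, and all parameter values are illustrative assumptions and do not reproduce the paper's exact GS-ADMM or its stepsize domain.

```python
# Illustrative sketch (not the paper's exact GS-ADMM): grouped updates for
#   min  sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b,
# with p blocks in group one and q blocks in group two.  Within a group the
# blocks are updated in a Jacobi fashion (all from the same reference point);
# between groups the order is Gauss-Seidel; the multiplier is updated twice
# with stepsizes tau and sigma.  All parameter choices here are ad hoc.
import numpy as np

rng = np.random.default_rng(0)
p, q, n, m = 2, 2, 5, 4                      # block counts and sizes
A = [rng.standard_normal((m, n)) for _ in range(p + q)]
c = [rng.standard_normal(n) for _ in range(p + q)]
b = rng.standard_normal(m)

beta, tau, sigma, mu = 1.0, 0.8, 0.8, 2.0    # penalty, dual stepsizes, proximal weight
x = [np.zeros(n) for _ in range(p + q)]
lam = np.zeros(m)

def solve_block(j, x_ref, lam):
    """Proximal block subproblem for the toy quadratic f_j; closed form."""
    s = sum(A[i] @ x_ref[i] for i in range(p + q) if i != j) - b
    H = np.eye(n) + beta * A[j].T @ A[j] + mu * np.eye(n)
    g = c[j] + A[j].T @ lam - beta * A[j].T @ s + mu * x_ref[j]
    return np.linalg.solve(H, g)

for k in range(300):
    x_old = [xi.copy() for xi in x]
    # group one: Jacobi updates from the previous iterate
    for j in range(p):
        x[j] = solve_block(j, x_old, lam)
    # first multiplier update with stepsize tau
    lam = lam - tau * beta * (sum(A[i] @ x[i] for i in range(p + q)) - b)
    # group two: Jacobi updates using the new group-one blocks
    x_mid = [xi.copy() for xi in x]
    for j in range(p, p + q):
        x[j] = solve_block(j, x_mid, lam)
    # second multiplier update with stepsize sigma
    lam = lam - sigma * beta * (sum(A[i] @ x[i] for i in range(p + q)) - b)

print("feasibility residual:", np.linalg.norm(sum(A[i] @ x[i] for i in range(p + q)) - b))
```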

2.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (AALM) for solving linearly constrained convex programming problems. We show that the convergence rate of AALM is \(O(1/k^2)\), whereas the convergence rate of the classical augmented Lagrangian method (ALM) is \(O(1/k)\). Numerical experiments on the linearly constrained \(\ell _1\)–\(\ell _2\) minimization problem are presented to demonstrate the effectiveness of AALM.
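As a point of reference for the \(O(1/k)\) baseline mentioned above, here is a minimal sketch of the classical ALM iteration on a toy equality-constrained quadratic. Accelerated variants are typically obtained by adding a Nesterov-type extrapolation step, which is not shown here; the problem data and penalty parameter are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the classical augmented Lagrangian method (ALM) on
#   min 0.5*||x - c||^2   s.t.  A x = b,
# which the abstract contrasts with its accelerated variant AALM.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
A, b, c = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)

beta = 1.0
x, lam = np.zeros(n), np.zeros(m)
for k in range(100):
    # x-step: minimize the augmented Lagrangian in x (closed form for this quadratic)
    H = np.eye(n) + beta * A.T @ A
    x = np.linalg.solve(H, c + A.T @ lam + beta * A.T @ b)
    # multiplier step
    lam = lam - beta * (A @ x - b)

print("feasibility:", np.linalg.norm(A @ x - b))
```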

3.
A customized Douglas–Rachford splitting method (DRSM) was recently proposed to solve two-block separable convex optimization problems with linear constraints and simple abstract constraints. The algorithm has an advantage over the well-known alternating direction method of multipliers (ADMM), the dual application of DRSM to the two-block convex minimization problem, in that its subproblems are unconstrained and therefore more likely to admit closed-form solutions. In this paper, we continue along this line by considering the primal application of DRSM to the general case m≥3, i.e., the multi-block separable convex minimization problem with linear constraints whose objective is separable into m individual convex functions without coupled variables. The resulting method fully exploits the separable structure and enjoys decoupled subproblems that can be solved simultaneously. Both the exact and inexact versions of the new method are presented in a unified framework. Under mild conditions, we prove the global convergence of the algorithm. Preliminary numerical experiments on extracting the background from corrupted surveillance video verify the encouraging efficiency of the new algorithm.
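The practical appeal described above is that, once the coupling is fixed at the current point, the m block subproblems become independent proximal steps. The fragment below is a minimal illustration of that decoupling only (simple quadratic blocks whose proximal steps are evaluated in parallel with a thread pool); it is not the paper's DRSM update, and all data are illustrative assumptions.

```python
# Decoupled block subproblems: with the coupling term frozen, each block's
# proximal step depends only on its own data and can be computed simultaneously.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def prox_block(args):
    """Proximal step for one toy block f_i(x) = 0.5*||x - c_i||^2 at point v_i."""
    c_i, v_i, rho = args
    return (c_i + rho * v_i) / (1.0 + rho)   # argmin 0.5*||x-c_i||^2 + (rho/2)*||x-v_i||^2

m_blocks, n, rho = 4, 5, 1.0
rng = np.random.default_rng(2)
c = [rng.standard_normal(n) for _ in range(m_blocks)]
v = [rng.standard_normal(n) for _ in range(m_blocks)]   # points passed to each prox

with ThreadPoolExecutor() as pool:
    x_new = list(pool.map(prox_block, [(c[i], v[i], rho) for i in range(m_blocks)]))
print(np.round(x_new[0], 3))
```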

4.
In this paper, we provide a unified iteration complexity analysis for a family of general block coordinate descent methods, covering popular methods such as block coordinate gradient descent and block coordinate proximal gradient, under various coordinate update rules. We unify these algorithms under the so-called block successive upper-bound minimization (BSUM) framework and show that, for a broad class of multi-block nonsmooth convex problems, all algorithms covered by the BSUM framework achieve a global sublinear iteration complexity of \({\mathcal {O}}(1/r)\), where r is the iteration index. Moreover, for block coordinate minimization, where each block is minimized exactly, we establish the sublinear convergence rate of \({\mathcal {O}}(1/r)\) without a per-block strong convexity assumption.
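As a concrete instance of an algorithm covered by the BSUM framework, the sketch below runs cyclic block coordinate gradient descent on a convex quadratic: each block step minimizes a quadratic upper bound of the objective restricted to that block (a gradient step with stepsize one over the block Lipschitz constant). The problem, block partition, and stepsizes are illustrative assumptions.

```python
# Cyclic block coordinate gradient descent on a convex quadratic, a special
# case of block successive upper-bound minimization (BSUM).
import numpy as np

rng = np.random.default_rng(3)
n, blocks = 12, [range(0, 4), range(4, 8), range(8, 12)]
Q = rng.standard_normal((n, n)); Q = Q.T @ Q + np.eye(n)   # convex quadratic
p = rng.standard_normal(n)
f = lambda x: 0.5 * x @ Q @ x + p @ x

x = np.zeros(n)
for it in range(200):
    for blk in blocks:                                 # cyclic rule; BSUM also covers others
        idx = np.array(blk)
        grad_blk = (Q @ x + p)[idx]
        L_blk = np.linalg.norm(Q[np.ix_(idx, idx)], 2)  # block Lipschitz constant
        x[idx] -= grad_blk / L_blk                      # minimizes a quadratic upper bound over the block
print("f(x) =", round(f(x), 4))
```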

5.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analyses of the ADMM typically assume that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate-of-convergence analysis for the ADMM without strong convexity of the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds and that the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping plus a nonsmooth \(\ell _1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
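For context, the LASSO application mentioned above is the textbook two-block use of the ADMM; a standard scaled-form sketch follows, with illustrative data and parameters. It shows the algorithm only and is not tied to the error bound analysis of the paper.

```python
# Standard two-block ADMM (scaled form) applied to LASSO:
#   min 0.5*||A x - b||^2 + lam*||z||_1   s.t.  x - z = 0.
import numpy as np

rng = np.random.default_rng(4)
m, n, lam_reg, rho = 20, 50, 0.1, 1.0
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

x = z = u = np.zeros(n)                       # u is the scaled dual variable
H = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for k in range(300):
    x = H @ (A.T @ b + rho * (z - u))         # x-step: ridge-type solve
    z = soft(x + u, lam_reg / rho)            # z-step: soft-thresholding (prox of l1)
    u = u + x - z                             # scaled dual update
print("nonzeros in z:", int(np.sum(np.abs(z) > 1e-6)))
```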

6.
A dual scheme for solving a crack problem formulated in terms of displacements is considered. The dual solution method is based on a modified Lagrange functional. The convergence of the method is investigated under a natural assumption of \(H^1\)-regularity of the solution to the crack problem. A duality relation between the primal and dual problems is proposed.

7.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. With properly chosen algorithm parameters, numerical experiments on a sparse optimization problem arising from statistical learning show that the P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
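For readers less familiar with proximal point methods, the fragment below shows the classical (unparameterized) proximal point iteration that the P-PPA generalizes, on a toy l1-regularized quadratic whose proximal step has a closed form. The data and stepsize are illustrative assumptions, and the paper's specific parameterization is not reproduced.

```python
# Plain proximal point iteration  x_{k+1} = argmin_x f(x) + (1/(2t))*||x - x_k||^2
# for the toy objective f(x) = 0.5*||x - c||^2 + lam*||x||_1.
import numpy as np

soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

c = np.array([3.0, -0.5, 0.2, -2.0])
lam, t = 1.0, 2.0                      # l1 weight and proximal stepsize
x = np.zeros_like(c)
for k in range(50):
    # closed-form proximal step: combine the two quadratics, then soft-threshold
    w = (c + x / t) / (1.0 + 1.0 / t)
    x = soft(w, lam / (1.0 + 1.0 / t))
print(x)                               # approaches the minimizer soft(c, lam) = [2, 0, 0, -1]
```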

8.
Recently, the alternating direction method of multipliers (ADMM) has found many efficient applications in various areas. It has been shown, however, that convergence is not guaranteed when the ADMM is directly extended to the multiple-block case of separable convex minimization problems, where there are m ≥ 3 functions without coupled variables in the objective. This fact has given great impetus to investigating conditions on both the model and the algorithm's parameters that can ensure the convergence of the direct extension of ADMM (abbreviated as "e-ADMM"). Apart from some results under very strong conditions (e.g., at least m − 1 of the functions should be strongly convex) that apply to the generic case with a general m, other results concentrate on the special case of m = 3 under the relatively milder condition that only one function is assumed to be strongly convex. We focus on extending the convergence analysis from the case of m = 3 to the more general case of m ≥ 3. That is, we show the convergence of e-ADMM for m ≥ 3 under the assumption that only m − 2 of the functions are strongly convex, and we establish its convergence rates in different scenarios, including worst-case convergence rates measured by iteration complexity and a globally linear convergence rate under stronger assumptions. Thus the convergence of e-ADMM for the general case of m ≥ 4 is proved; this result appears to have been unknown, even though it is intuitive given the known result for m = 3. Even for the special case of m = 3, our convergence results turn out to be more general than the existing results derived specifically for that case.
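The direct extension in question is simply one Gauss–Seidel sweep over the m blocks followed by a single multiplier update, as in the sketch below on a toy quadratic (all blocks in this toy example happen to be strongly convex). As the abstract stresses, this scheme is not convergent in general without extra assumptions; the data and parameters here are illustrative only.

```python
# Direct multi-block extension of ADMM (e-ADMM) on a toy problem
#   min sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b:
# a Gauss-Seidel sweep over the m blocks, then one multiplier update.
import numpy as np

rng = np.random.default_rng(5)
m_blocks, n, m = 3, 4, 3
A = [rng.standard_normal((m, n)) for _ in range(m_blocks)]
c = [rng.standard_normal(n) for _ in range(m_blocks)]
b = rng.standard_normal(m)
beta = 1.0
x = [np.zeros(n) for _ in range(m_blocks)]
lam = np.zeros(m)

for k in range(300):
    for j in range(m_blocks):                      # Gauss-Seidel sweep: newest blocks are reused
        s = sum(A[i] @ x[i] for i in range(m_blocks) if i != j) - b
        H = np.eye(n) + beta * A[j].T @ A[j]
        x[j] = np.linalg.solve(H, c[j] + A[j].T @ lam - beta * A[j].T @ s)
    lam = lam - beta * (sum(A[i] @ x[i] for i in range(m_blocks)) - b)

print("feasibility residual:", np.linalg.norm(sum(A[i] @ x[i] for i in range(m_blocks)) - b))
```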

9.
We propose a new modified primal–dual proximal best approximation method for solving convex, not necessarily differentiable, optimization problems. The novelty of the method lies in introducing memory: iterates computed in previous steps are taken into account in the formulas defining the current iterate. To this end, we consider projections onto intersections of halfspaces generated from the current as well as the previous iterates. To calculate these projections, we use recently obtained closed-form expressions for projectors onto polyhedral sets. The resulting algorithm with memory inherits the strong convergence properties of the original best approximation proximal primal–dual algorithm. Additionally, we compare our algorithm with the original (non-inertial) one with the help of a so-called attraction property. Extensive numerical experiments on image reconstruction problems illustrate the advantages of including memory in the original algorithm.
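The basic geometric building block referred to above is the projection onto a halfspace, for which a closed-form expression exists. The snippet below shows only this single-halfspace projector to fix ideas; the projectors onto intersections of halfspaces used by the method are more involved and are not reproduced here.

```python
# Euclidean projection of x onto the halfspace H = {y : a^T y <= b}.
import numpy as np

def project_halfspace(x, a, b):
    """Return the projection of x onto {y : a @ y <= b}."""
    viol = a @ x - b
    if viol <= 0.0:                   # x already satisfies the constraint
        return x
    return x - (viol / (a @ a)) * a   # move back along the normal direction

x = np.array([2.0, 2.0])
a = np.array([1.0, 0.0])
print(project_halfspace(x, a, 1.0))   # -> [1. 2.]
```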

10.
In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of function values. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables. This conclusion holds for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence \(O\Big ({n^2 \over k^2}\Big )\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\Big ({n \over k^{1/2}}\Big )\). We also give some bounds on the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both the smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
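To make the scheme concrete, the sketch below performs random gradient-free steps of the kind analyzed above: a Gaussian direction u and a forward finite difference give the zero-order estimate g = ((f(x + mu*u) - f(x)) / mu) * u, which replaces the gradient. The test function, smoothing parameter, and stepsize are illustrative assumptions, not the paper's exact constants.

```python
# Random gradient-free (zero-order) method with Gaussian search directions.
import numpy as np

def f(x):                          # smooth convex test function
    return 0.5 * np.sum(x ** 2) + np.sum(np.log(1 + np.exp(x)))

rng = np.random.default_rng(6)
n, mu = 10, 1e-6                   # dimension and finite-difference smoothing
h = 1.0 / (4 * (n + 4))            # stepsize of the order suggested by the theory (Lipschitz constant near 1)
x = rng.standard_normal(n)

for k in range(2000):
    u = rng.standard_normal(n)                  # random Gaussian direction
    g = (f(x + mu * u) - f(x)) / mu * u         # directional-derivative estimate times u
    x = x - h * g
print("f(x) =", round(f(x), 4))
```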
