Similar Articles
20 similar articles found.
1.
We point out why the direct extension of the classical alternating direction method of multipliers (ADMM) to problems with three operators cannot be guaranteed to converge, and we give corresponding strategies for modifying it into a convergent algorithm. Moreover, within a unified framework, we prove the convergence of the modified ADMM and establish its O(1/t) convergence rate in the ergodic sense.
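For orientation, the direct three-block extension discussed above takes the following standard form for min θ₁(x)+θ₂(y)+θ₃(z) subject to Ax+By+Cz=b; this is a generic sketch in common notation, not necessarily the exact scheme analyzed in the cited paper:

```latex
% Direct extension of ADMM to three blocks (generic sketch):
\begin{aligned}
x^{k+1} &= \arg\min_x \; \mathcal{L}_\beta(x,\, y^k,\, z^k,\, \lambda^k), \\
y^{k+1} &= \arg\min_y \; \mathcal{L}_\beta(x^{k+1},\, y,\, z^k,\, \lambda^k), \\
z^{k+1} &= \arg\min_z \; \mathcal{L}_\beta(x^{k+1},\, y^{k+1},\, z,\, \lambda^k), \\
\lambda^{k+1} &= \lambda^k - \beta \left( A x^{k+1} + B y^{k+1} + C z^{k+1} - b \right),
\end{aligned}
```

where \(\mathcal{L}_\beta(x,y,z,\lambda) = \theta_1(x)+\theta_2(y)+\theta_3(z) - \lambda^\top(Ax+By+Cz-b) + \tfrac{\beta}{2}\|Ax+By+Cz-b\|^2\) is the augmented Lagrangian with penalty β > 0.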

2.
For minimizing the sum of two separable convex functions subject to linear constraints, we propose a generalized alternating proximal gradient algorithm within the framework of the alternating direction method. Under certain conditions, the algorithm converges globally and linearly. Numerical experiments show that it performs well in practice.
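As background, a single proximal gradient step for a composite model min f(x)+g(x), with f smooth, takes the standard form below; the paper's generalized alternating variant applies such steps blockwise inside the alternating direction framework (a generic sketch, not the cited paper's exact scheme):

```latex
x^{k+1} = \operatorname{prox}_{t g}\!\left(x^k - t \nabla f(x^k)\right),
\qquad
\operatorname{prox}_{t g}(v) = \arg\min_x \; g(x) + \tfrac{1}{2t}\|x - v\|^2 .
```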

3.
The alternating direction method (ADM) has been well studied in the context of linearly constrained convex programming problems. Recently, because of its significant efficiency and easy implementation in novel applications, ADM has been extended to the case where the number of separable parts is an arbitrary finite number. The algorithmic framework of the extended method consists of two phases. At each iteration, it first produces a trial point by using the usual alternating direction scheme, and then the next iterate is updated by using a distance-descent direction offered by the trial point. The generated sequence approaches the solution set monotonically in the Fejér sense, and the method is called the alternating direction-based contraction (ADBC) method. In this paper, in order to simplify the subproblems in the first phase, we add a proximal term to the objective function of the minimization subproblems. The resulting algorithm is called the proximal alternating direction-based contraction (PADBC) method. In addition, we present different linearized versions of the PADBC method, which substantially broaden the applicable scope of the ADBC method. All the presented algorithms are guided by a general framework of contraction methods for monotone variational inequalities, and thus their convergence follows directly.

4.
We consider linearly constrained separable convex programming, whose objective function is separable into m individual convex functions without coupled variables. The alternating direction method of multipliers has been well studied in the literature for the special case m=2, while it remains open whether its convergence can be extended to the general case m≥3. This note shows the global convergence of this extension when the involved functions are further assumed to be strongly convex.
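In the notation commonly used for this model, the problem and the classical two-block iteration read as follows (a standard sketch; the direct m ≥ 3 extension applies the same alternating pattern to every block, with the augmented Lagrangian \(\mathcal{L}_\beta\) defined analogously to the above):

```latex
\min_{x_1,\dots,x_m} \; \sum_{i=1}^m \theta_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^m A_i x_i = b ;
\qquad \text{for } m = 2:\quad
\begin{aligned}
x_1^{k+1} &= \arg\min_{x_1} \; \mathcal{L}_\beta(x_1,\, x_2^k,\, \lambda^k), \\
x_2^{k+1} &= \arg\min_{x_2} \; \mathcal{L}_\beta(x_1^{k+1},\, x_2,\, \lambda^k), \\
\lambda^{k+1} &= \lambda^k - \beta \left( A_1 x_1^{k+1} + A_2 x_2^{k+1} - b \right).
\end{aligned}
```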

5.
In this paper, we study an inexact version of the alternating direction method of multipliers (ADMM) for solving two-block separable linearly constrained convex optimization problems. Specifically, the two subproblems in the classic ADMM are allowed to be solved inexactly by certain relative error criteria, in the sense that only two parameters are needed to control the inexactness. The related convergence analysis is established under the assumption that the solution set to the KKT system of the problem is not empty. Numerical results on solving a class of sparse signal recovery problems are also provided to demonstrate the efficiency of the proposed algorithm.

6.
The proximal alternating direction method of multipliers is a popular and useful method for linearly constrained, separable convex problems, especially in the linearized case. In the literature, convergence of the proximal alternating direction method has been established under the assumption that the proximal regularization matrix is positive semi-definite. Recently, it was shown that the regularizing proximal term in the proximal alternating direction method of multipliers does not necessarily have to be positive semi-definite, without any additional assumptions. However, it remained unknown whether the indefinite setting is valid for the proximal version of the symmetric alternating direction method of multipliers. In this paper, we confirm that the symmetric alternating direction method of multipliers can also be regularized with an indefinite proximal term. We theoretically prove the global convergence of the indefinite method and establish its worst-case convergence rate in an ergodic sense. In addition, the generalized alternating direction method of multipliers proposed by Eckstein and Bertsekas is a special case in our discussion. Finally, we demonstrate through experimental results the performance improvements achieved when using the indefinite proximal term.
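For context, a proximal ADMM replaces, for example, the x-subproblem by (standard notation; a generic sketch, not the cited paper's exact scheme):

```latex
x^{k+1} = \arg\min_x \; \mathcal{L}_\beta(x,\, y^k,\, \lambda^k)
          + \tfrac{1}{2}\,\|x - x^k\|_G^2 ,
```

where in the widely used linearized case \(G = \tau I - \beta A^\top A\). Requiring \(G \succeq 0\) forces \(\tau \ge \beta\|A^\top A\|\), whereas the indefinite setting discussed above allows suitably smaller \(\tau\), which typically enlarges the effective stepsize of the subproblem.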

7.
The alternating direction method of multipliers (ADMM for short) is efficient for linearly constrained convex optimization problems. The practical computational c...

8.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied for example when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth \(\ell_1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
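As a concrete instance of this setting, here is a minimal two-block ADMM for LASSO, min_x 0.5‖Ax−b‖² + λ‖x‖₁, written in consensus form (x − z = 0). This is a textbook sketch; the penalty rho, iteration count, and data below are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    # Cache the Cholesky factor of (A^T A + rho I) for the x-updates.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding (the prox of the l1 norm)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return z

# Tiny usage example with synthetic data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3)) + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)
```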

9.
In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has Lipschitz continuous gradient, then we prove that our method obtains an $\epsilon$-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. Analysis for the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that on very large problems, our method is much more numerically efficient than methods based on full gradient information.
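To illustrate the mechanism (shared by the network method in item 11 below), here is a minimal sketch of random two-coordinate descent for the smooth special case min 0.5 xᵀQx − cᵀx subject to the single coupling constraint sum(x) = const: moving along e_i − e_j keeps the constraint satisfied. The quadratic objective and the Lipschitz-based step are illustrative assumptions, not the cited paper's general composite scheme:

```python
import numpy as np

def pair_coordinate_descent(Q, c, x0, n_iter=5000, rng=None):
    # Minimize f(x) = 0.5 x^T Q x - c^T x  subject to  sum(x) = sum(x0),
    # with Q symmetric positive semidefinite and positive diagonal.
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    n = len(x)
    L = np.diag(Q)                      # coordinate-wise Lipschitz constants
    for _ in range(n_iter):
        i, j = rng.choice(n, size=2, replace=False)
        gi = Q[i] @ x - c[i]            # partial derivative at x, coord i
        gj = Q[j] @ x - c[j]            # partial derivative at x, coord j
        t = -(gi - gj) / (L[i] + L[j])  # step along d = e_i - e_j
        x[i] += t
        x[j] -= t                       # sum(x) is unchanged
    return x
```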

10.
In this paper, we obtain global pointwise and ergodic convergence rates for a variable metric proximal alternating direction method of multipliers for solving linearly constrained convex optimization problems. We first propose and study nonasymptotic convergence rates of a variable metric hybrid proximal extragradient framework for solving monotone inclusions. Then, the convergence rates for the former method are obtained essentially by showing that it falls within the latter framework. To the best of our knowledge, this is the first time that global pointwise (resp. pointwise and ergodic) convergence rates are obtained for the variable metric proximal alternating direction method of multipliers (resp. variable metric hybrid proximal extragradient framework).

11.
In this paper we develop random block coordinate descent methods for minimizing large-scale linearly constrained convex problems over networks. Since coupled constraints appear in the problem, we devise an algorithm that updates in parallel at each iteration at least two random components of the solution, chosen according to a given probability distribution. Those computations can be performed in a distributed fashion according to the structure of the network. Complexity per iteration of the proposed methods is usually cheaper than that of the full gradient method when the number of nodes in the network is much larger than the number of updated components. On smooth convex problems, we prove that these methods exhibit a sublinear worst-case convergence rate in the expected value of the objective function. Moreover, this convergence rate depends linearly on the number of components to be updated. On smooth strongly convex problems we prove that our methods converge linearly. We also focus on how to choose the probabilities to make our randomized algorithms converge as fast as possible, which leads us to solving a sparse semidefinite program. We then describe several applications that fit in our framework, in particular the convex feasibility problem. Finally, numerical experiments illustrate the behaviour of our methods, showing in particular that updating more than two components in parallel accelerates the method.

12.
The convenience of converting linearly constrained convex optimization problems into monotone variational inequalities is well recognized. Recently, we proposed a unified algorithmic framework which can guide the construction of solution methods for these monotone variational inequalities. In this work, we revisit two full Jacobian decompositions of the augmented Lagrangian method for separable convex programming that we studied a few years ago. In particular, exploiting this framework, we are able to give a very clear and elementary proof of the convergence of these solution methods.
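The uncorrected full Jacobian decomposition solves all block subproblems in parallel from the previous iterate (a generic sketch; by itself this scheme need not converge, which is why the methods above append a correction step):

```latex
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i} \;
\mathcal{L}_\beta(x_1^k, \dots, x_{i-1}^k,\, x_i,\, x_{i+1}^k, \dots, x_m^k,\, \lambda^k),
\qquad i = 1, \dots, m \ \text{(in parallel)}, \\
\lambda^{k+1} &= \lambda^k - \beta \Big( \textstyle\sum_{i=1}^m A_i x_i^{k+1} - b \Big).
\end{aligned}
```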

13.
A parallel Nesterov algorithm for the unconstrained minimization of large-scale partially separable convex functions is presented. The problem is first transformed into a linearly constrained minimization of a separable function. A fast projected gradient (Nesterov) method is then applied to obtain a decomposition method with \(O(1/k^2)\) rate of convergence (where k is the iteration number). Preliminary numerical experiments show the efficiency of the proposed approach.
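A standard accelerated projected gradient template with this \(O(1/k^2)\) rate, for min f(x) over a closed convex set C with L-Lipschitz gradient, reads as follows (a generic sketch; the cited method applies such a scheme to the separable reformulation):

```latex
\begin{aligned}
x^{k} &= P_C\!\left( y^{k} - \tfrac{1}{L}\, \nabla f(y^{k}) \right), \\
t_{k+1} &= \tfrac{1}{2}\left(1 + \sqrt{1 + 4 t_k^2}\right), \qquad t_1 = 1, \\
y^{k+1} &= x^{k} + \tfrac{t_k - 1}{t_{k+1}} \left( x^{k} - x^{k-1} \right),
\end{aligned}
```

with \(y^1 = x^0\) and \(P_C\) the Euclidean projection onto C.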

14.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. With properly chosen algorithm parameters, numerical experiments on a sparse optimization problem arising from statistical learning show that the P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.

15.
In the present paper, we propose a novel convergence analysis of the alternating direction method of multipliers, based on its equivalence with the overrelaxed primal-dual hybrid gradient algorithm. We consider the smooth case, where the objective function can be decomposed into a differentiable part with Lipschitz continuous gradient and a strongly convex part. Under these hypotheses, a convergence proof with an optimal parameter choice is given for the primal-dual method, which leads to convergence results for the alternating direction method of multipliers. An accelerated variant of the latter, based on a parameter relaxation, is also proposed and shown to converge linearly with the same asymptotic rate as the primal-dual algorithm.
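For reference, the basic (non-relaxed) primal-dual hybrid gradient iteration for min_x g(x) + f(Kx) is the following (a standard sketch; the overrelaxed variant used above modifies the extrapolation and stepsize parameters):

```latex
\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma f^*}\!\left( y^k + \sigma K \bar{x}^k \right), \\
x^{k+1} &= \operatorname{prox}_{\tau g}\!\left( x^k - \tau K^\top y^{k+1} \right), \\
\bar{x}^{k+1} &= x^{k+1} + \theta \left( x^{k+1} - x^k \right),
\end{aligned}
```

which converges for \(\theta = 1\) when \(\sigma\tau\|K\|^2 < 1\).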

16.
黎超琼, 李锋. 《运筹学学报》, 2010, 24(1): 101-114.
The LQP alternating direction method is a very effective method for solving monotone variational inequalities with separable structure. It not only fully exploits the separable structure of the objective function, decomposing the original problem into several subproblems that are easier to solve, but is also well suited to large-scale problems. For monotone variational inequalities with three separable operators, we combine the augmented Lagrangian algorithm with the LQP alternating direction method to propose a partially parallel splitting LQP alternating direction method. We construct two descent directions for the new algorithm, combine them to obtain a new descent direction, and derive the optimal step size along this new direction. Under rather weak assumptions, we prove the global convergence of the new algorithm.

17.
The alternating direction method of multipliers (ADMM) is a benchmark for solving a two-block linearly constrained convex minimization model whose objective function is the sum of two functions without coupled variables. Meanwhile, it is known that convergence is not guaranteed if the ADMM is directly extended to a multiple-block convex minimization model whose objective function has more than two functions. Recently, some authors have actively studied strong convexity conditions on the objective function that suffice to ensure the convergence of the direct extension of ADMM, or the convergence that results when the original scheme is appropriately twisted. We focus on the three-block case of such a model, whose objective function is the sum of three functions, and discuss the convergence of the direct extension of ADMM. We show that if one function in the objective is strongly convex and the penalty parameter and the operators in the linear equality constraint are appropriately restricted, then the convergence of the direct extension of ADMM is guaranteed. We further estimate the worst-case convergence rate measured by the iteration complexity in both the ergodic and nonergodic senses, and derive global linear convergence in the asymptotic sense under some additional conditions.

18.
In this paper, we consider linearly constrained multiobjective minimization, and we propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.
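One standard form of such a direction-finding subproblem, written here for the unconstrained case (a sketch; the reduced-gradient subproblem above additionally accounts for the linear constraints), is

```latex
d(x) = \arg\min_{d} \; \max_{1 \le i \le m} \nabla f_i(x)^\top d + \tfrac{1}{2}\|d\|^2 ,
```

a convex quadratic program whose solution is a common descent direction for all m objectives, with \(d(x) = 0\) exactly at Pareto critical points.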

19.
Convex optimization methods are used for many machine learning models, such as the support vector machine. However, the requirement of a convex formulation can place limitations on machine learning models. In recent years, a number of machine learning methods not requiring convexity have emerged. In this paper, we study non-convex optimization problems on the Stiefel manifold, in which the feasible set consists of rectangular matrices with orthonormal column vectors. We present examples of non-convex optimization problems in machine learning and apply three nonlinear optimization methods for finding a locally optimal solution: the geometric gradient descent method, the augmented Lagrangian method of multipliers, and the alternating direction method of multipliers. Although the geometric gradient method is often used to solve non-convex optimization problems on the Stiefel manifold, we show that the alternating direction method of multipliers generally produces higher-quality numerical solutions within a reasonable computation time.
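A minimal sketch of geometric (Riemannian) gradient descent on the Stiefel manifold with a QR retraction follows; the objective f(X) = −trace(XᵀMX) (a leading-eigenspace model), the fixed stepsize, and the function names are illustrative assumptions, not the models of the cited paper:

```python
import numpy as np

def stiefel_gradient_descent(M, n, p, step=0.1, n_iter=500, seed=0):
    # Minimize f(X) = -trace(X^T M X), M symmetric, over {X : X^T X = I}.
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random feasible start
    for _ in range(n_iter):
        G = -2.0 * (M @ X)                 # Euclidean gradient of f at X
        # Project onto the tangent space at X (embedded metric):
        sym = (X.T @ G + G.T @ X) / 2.0
        xi = G - X @ sym                   # Riemannian gradient
        # Retract back onto the manifold via QR:
        X, R = np.linalg.qr(X - step * xi)
        s = np.sign(np.diag(R))            # fix the sign ambiguity of QR
        s[s == 0] = 1.0
        X = X * s
    return X
```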

20.
An inverse problem for a class of nondifferentiable quadratic programs
This paper solves a class of inverse quadratic programming problems, specifically the minimization of an objective given by the sum of a matrix spectral norm and a vector infinity norm. We first reformulate the problem as a convex optimization problem whose objective is separable in its variables and propose solving it with the G-ADMM. The corresponding subproblems are solved exactly by combining the singular value thresholding algorithm, the Moreau-Yosida regularization algorithm, and the quadprog function of the MATLAB Optimization Toolbox. In the exact solution of one of these subproblems, we find that it is again a convex optimization problem with a separable objective; since its variables are all matrices, we solve it with an alternating direction method suited to multiple matrix variables, introducing new variables so that each of its subproblems admits a closed-form solution. Finally, numerical experiments on solving our problem with the G-ADMM are reported. The data show that the adopted method solves this inverse quadratic programming problem efficiently and quickly.
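For illustration, the singular value thresholding operator mentioned above — the proximal map of τ times the nuclear norm — admits the following short implementation (a generic sketch, independent of the cited paper's G-ADMM setup and of the exact subproblem it is applied to there):

```python
import numpy as np

def svt(Y, tau):
    # prox of tau * (nuclear norm) at Y: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```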
