On the convergence of the coordinate descent method for convex differentiable minimization
Authors: Z. Q. Luo, P. Tseng
Affiliation: (1) Communications Research Laboratory, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada; (2) Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts; (3) Present address: Department of Mathematics, University of Washington, Seattle, Washington
Abstract: The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions, such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent.

This work was partially supported by the U.S. Army Research Office, Contract No. DAAL03-86-K-0171, by the National Science Foundation, Grant No. ECS-8519058, and by the Science and Engineering Research Board of McMaster University.
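For readers unfamiliar with the method, the sketch below illustrates the kind of iteration the paper analyzes: exact cyclic coordinate minimization of f(x) = ½‖Ex − b‖², i.e., the composition of the affine map x ↦ Ex − b with the strictly convex function g(y) = ½‖y‖². When E has linearly dependent columns, f is convex but not strictly convex and its level sets are unbounded, which matches the setting of the convergence result. This is a minimal illustrative sketch, not the authors' algorithm statement from the paper; the cyclic sweep order, the quadratic choice of g, and all names (`coordinate_descent`, `sweeps`) are assumptions made for the example.

```python
import numpy as np

def coordinate_descent(E, b, x0, sweeps=200):
    """Cyclic coordinate descent on f(x) = 0.5 * ||E x - b||^2.

    Each inner step exactly minimizes f over a single coordinate,
    holding the others fixed. f need not be strictly convex in x
    (e.g., when E has deficient column rank), the situation in which
    convergence of the iterates is the delicate question.
    """
    x = np.asarray(x0, dtype=float).copy()
    r = E @ x - b                      # residual E x - b, maintained incrementally
    col_sq = (E ** 2).sum(axis=0)      # squared column norms ||E_i||^2
    for _ in range(sweeps):
        for i in range(E.shape[1]):
            if col_sq[i] == 0.0:       # coordinate i does not affect f
                continue
            # Exact line minimization along coordinate i:
            # df/dx_i = E_i^T r, d2f/dx_i^2 = ||E_i||^2.
            step = (E[:, i] @ r) / col_sq[i]
            x[i] -= step
            r -= step * E[:, i]        # keep r consistent with the new x
    return x

# Example: dependent columns, so f is convex but not strictly convex.
E = np.array([[1.0, 2.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = coordinate_descent(E, b, x0=np.zeros(3))
```

Maintaining the residual r incrementally makes each coordinate step cost O(m) for an m-row E, rather than recomputing Ex − b from scratch; this is a standard implementation choice, not something prescribed by the paper.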
Keywords: Coordinate descent; convex differentiable optimization; symmetric linear complementarity problems
This article is indexed in SpringerLink and other databases.