Iteratively reweighted least squares minimization for sparse recovery
Authors: Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, C. Sinan Güntürk
Institution:1. Princeton University, Department of Mathematics and Program in Applied and Computational Mathematics, Fine Hall, Washington Road, Princeton, NJ 08544;2. University of South Carolina, Department of Mathematics, Columbia, SC 29208;3. Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenbergerstrasse 69, A‐4040 Linz, Austria;4. Courant Institute, 251 Mercer Street, New York, NY 10012
Abstract: Under certain conditions (known as the restricted isometry property, or RIP) on the $m \times N$ matrix $\Phi$ (where $m < N$), vectors $x \in \mathbb{R}^N$ that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from $y := \Phi x$ even though $\Phi^{-1}(y)$ is typically an $(N - m)$-dimensional hyperplane; in addition, $x$ is then equal to the element in $\Phi^{-1}(y)$ of minimal $\ell_1$-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining $x$, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector $w$, the element in $\Phi^{-1}(y)$ with smallest $\ell_2(w)$-norm. If $x^{(n)}$ is the solution at iteration step $n$, then the new weight $w^{(n)}$ is defined by $w_i^{(n)} := \left[|x_i^{(n)}|^2 + \varepsilon_n^2\right]^{-1/2}$, $i = 1, \dots, N$, for a decreasing sequence of adaptively defined $\varepsilon_n$; this updated weight is then used to obtain $x^{(n+1)}$ and the process is repeated. We prove that when $\Phi$ satisfies the RIP conditions, the sequence $x^{(n)}$ converges for all $y$, regardless of whether $\Phi^{-1}(y)$ contains a sparse vector. If there is a sparse vector in $\Phi^{-1}(y)$, then the limit is this sparse vector, and when $x^{(n)}$ is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight $w_i^{(n)} = \left[|x_i^{(n)}|^2 + \varepsilon_n^2\right]^{-1 + \tau/2}$, $i = 1, \dots, N$, where $0 < \tau < 1$, can recover sparse solutions as well; more importantly, we show its local convergence is superlinear and approaches a quadratic rate as $\tau$ approaches 0. © 2009 Wiley Periodicals, Inc.
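The weighted least-squares step described in the abstract has a closed form: minimizing $\sum_i w_i x_i^2$ subject to $\Phi x = y$ gives $x = D\Phi^T(\Phi D\Phi^T)^{-1}y$ with $D = \mathrm{diag}(1/w_i)$. The following is a minimal numerical sketch of that iteration, not the authors' reference implementation; the specific $\varepsilon$-update rule (the $(K{+}1)$-st largest magnitude of the current iterate divided by $N$), the stopping tolerance, and the test dimensions are assumptions chosen for illustration.

```python
import numpy as np

def irls_sparse(Phi, y, K, max_iter=100, tol=1e-8):
    """Sketch of the IRLS scheme from the abstract (assumed epsilon rule)."""
    m, N = Phi.shape
    w = np.ones(N)          # initial weights: plain least-squares
    eps = 1.0
    x = np.zeros(N)
    for _ in range(max_iter):
        # Weighted minimum-norm step: argmin sum_i w_i x_i^2  s.t.  Phi x = y,
        # via the closed form x = D Phi^T (Phi D Phi^T)^{-1} y, D = diag(1/w).
        D = np.diag(1.0 / w)
        x = D @ Phi.T @ np.linalg.solve(Phi @ D @ Phi.T, y)
        # Decrease eps adaptively using the (K+1)-st largest magnitude of x
        # (an assumed choice of the paper's nonincreasing eps_n sequence).
        r = np.sort(np.abs(x))[::-1]
        eps = min(eps, r[K] / N)
        if eps < tol:
            break
        # Reweight: w_i = (|x_i|^2 + eps^2)^{-1/2}, as in the abstract.
        w = 1.0 / np.sqrt(x**2 + eps**2)
    return x

# Illustration: recover a 2-sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
m, N, K = 15, 30, 2
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[[3, 17]] = [1.5, -2.0]
y = Phi @ x_true
x_hat = irls_sparse(Phi, y, K)
```

With random Gaussian $\Phi$ at these dimensions the RIP-type conditions hold with high probability, so the iterates should converge to the sparse vector; the superlinear variant would instead use the exponent $-1 + \tau/2$ in the reweighting line.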