For most large underdetermined systems of equations, the minimal ℓ1‐norm near‐solution approximates the sparsest near‐solution
Authors: David L. Donoho
| |
Abstract: We consider inexact linear equations y ≈ Φx where y is a given vector in ℝⁿ, Φ is a given n × m matrix, and we wish to find x₀,ε as sparse as possible while obeying ‖y − Φx₀,ε‖₂ ≤ ε. In general, this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ¹‐minimization problem is convex and is considered tractable. We show that for most Φ, if the optimally sparse approximation x₀,ε is sufficiently sparse, then the solution x₁,ε of the ℓ¹‐minimization problem is a good approximation to x₀,ε. We suppose that the columns of Φ are normalized to the unit ℓ²‐norm, and we place uniform measure on such Φ. We study the underdetermined case where m ~ τn and τ > 1, and prove the existence of ρ = ρ(τ) > 0 and C = C(ρ, τ) so that for large n and for all Φ's except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ‖y − Φx₀‖₂ ≤ ε by a coefficient vector x₀ ∈ ℝᵐ with fewer than ρ · n nonzeros, ‖x₁,ε − x₀‖₂ ≤ C · ε. This has two implications. First, for most Φ, whenever the combinatorial optimization result x₀,ε would be very sparse, x₁,ε is a good approximation to x₀,ε. Second, suppose we are given noisy data obeying y = Φx₀ + z where the unknown x₀ is known to be sparse and the noise ‖z‖₂ ≤ ε. For most Φ, noise‐tolerant ℓ¹‐minimization will stably recover x₀ from y in the presence of noise z. We also study the barely determined case m = n and reach parallel conclusions by slightly different arguments. Proof techniques include the use of almost‐spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices. © 2006 Wiley Periodicals, Inc.
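As a hedged illustration of the phenomenon the abstract describes (not the paper's own experiments): in the noiseless case ε = 0, the ℓ¹‐minimization problem min ‖x‖₁ subject to Φx = y is a linear program, obtained by splitting x = u − v with u, v ≥ 0. The sketch below, using scipy's LP solver and made-up dimensions n = 30, m = 60 (so τ = 2) with k = 3 nonzeros, typically recovers the sparse coefficient vector exactly:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 60, 3  # underdetermined: m ~ tau*n with tau = 2; k-sparse target

# Random matrix with columns normalized to unit l2-norm, as in the paper's setup.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# A sparse coefficient vector x0 with k nonzeros, and exact data y = Phi x0.
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

# Basis pursuit (the epsilon = 0 case of l1-minimization):
#   min ||x||_1  subject to  Phi x = y,
# cast as the LP  min 1'(u + v)  s.t.  [Phi, -Phi][u; v] = y,  u, v >= 0.
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x1 = res.x[:m] - res.x[m:]

# For sufficiently sparse x0, the l1 minimizer typically coincides with x0.
print(np.linalg.norm(x1 - x0))
```

Because k is far below the ρ · n threshold for these dimensions, the recovery error here is expected to be at the solver's numerical tolerance rather than of order ε.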