Similar Literature
10 similar documents found.
1.
We consider efficient methods for the recovery of block sparse signals from underdetermined systems of linear equations. We show that if the measurement matrix satisfies the block RIP with δ_{2s} < 0.4931, then every block s-sparse signal can be recovered through the proposed mixed ℓ2/ℓ1-minimization approach in the noiseless case, and is stably recovered in the presence of noise and mismodeling error. This improves the result of Eldar and Mishali (in IEEE Trans. Inform. Theory 55: 5302-5316, 2009). We also give another sufficient condition on the block RIP for such a recovery method: δ_s < 0.307.
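For illustration, the mixed ℓ2/ℓ1 objective can be attacked with a generic proximal-gradient (ISTA-style) loop whose key step is block soft-thresholding. The sketch below is such a generic solver, not the paper's specific method; blocks is assumed to be a list of index arrays describing the block structure, and lam and the iteration count are tuning choices.

```python
import numpy as np

def block_soft_threshold(x, blocks, tau):
    """Proximal operator of the mixed l2/l1 norm: shrink each
    block's l2 norm by tau, zeroing blocks whose norm is below tau."""
    out = np.zeros_like(x)
    for idx in blocks:                       # idx: index array of one block
        nrm = np.linalg.norm(x[idx])
        if nrm > tau:
            out[idx] = (1.0 - tau / nrm) * x[idx]
    return out

def block_ista(A, b, blocks, lam=0.1, n_iter=200):
    """Proximal gradient for min 0.5*||Ax - b||^2 + lam * sum_i ||x_i||_2."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft_threshold(x - A.T @ (A @ x - b) / L, blocks, lam / L)
    return x
```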

2.
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
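The reweighting loop is easy to state in code. Below is a minimal NumPy/SciPy sketch, not the authors' implementation: each weighted ℓ1 problem is solved as a linear program over the stacked variables [x; t] with |x_i| ≤ t_i, and the weights for the next round are 1/(|x_i| + ε). The helper names, ε, and the iteration count are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min_x sum_i w_i*|x_i| s.t. Ax = b, as an LP in z = [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=b,
                  bounds=[(None, None)] * (2 * n))
    return res.x[:n]

def reweighted_l1(A, b, n_iter=5, eps=0.1):
    """Iteratively reweighted l1: weights come from the previous solution."""
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)     # large entries get penalised less
    return x
```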

3.
Iterative Thresholding for Sparse Approximations
Sparse signal expansions represent or approximate a signal using a small number of elements from a large collection of elementary waveforms. Finding the optimal sparse expansion is known to be NP-hard in general, and non-optimal strategies such as Matching Pursuit, Orthogonal Matching Pursuit, Basis Pursuit and Basis Pursuit De-noising are often called upon. These methods show good performance in practical situations; however, they do not operate on the ℓ0-penalised cost functions that are often at the heart of the problem. In this paper we study two iterative algorithms that minimise the cost functions of interest. Furthermore, each iteration of these strategies has computational complexity similar to a Matching Pursuit iteration, making the methods applicable to many real-world problems. However, the optimisation problem is non-convex and the strategies are only guaranteed to find local solutions, so good initialisation becomes paramount. We here study two approaches. The first approach uses the proposed algorithms to refine the solutions found with other methods, replacing the typically used conjugate gradient solver. The second strategy adapts the algorithms, and we show on one example that this adaptation can be used to achieve results that lie between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
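A minimal sketch of one such strategy, iterative hard thresholding, which operates directly on the ℓ0-constrained objective: a gradient step followed by keeping the K largest entries. The step size and iteration count below are illustrative choices, not the paper's.

```python
import numpy as np

def iht(A, b, K, mu=None, n_iter=100):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= K."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + mu * A.T @ (b - A @ x)          # gradient step
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-K:]       # indices of the K largest magnitudes
        x[keep] = g[keep]                       # hard threshold: keep only those
    return x
```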

4.
The Lawson-Hanson with Deviation Maximization (LHDM) method is a block algorithm for the solution of NonNegative Least Squares (NNLS) problems. In this work we devise an improved version of LHDM and we show that it terminates in a finite number of steps, unlike the previous version, originally developed for a special class of matrices. Moreover, we are concerned with finding sparse solutions of underdetermined linear systems by means of NNLS. An extensive campaign of experiments is performed in order to evaluate the performance gain with respect to the standard Lawson-Hanson algorithm. We also show the ability of LHDM to retrieve sparse solutions, comparing it against several ℓ1-minimization solvers in terms of solution quality and time-to-solution on a large set of dense instances.
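As a point of reference, SciPy's nnls implements the classical Lawson-Hanson algorithm (the baseline here, not LHDM). The snippet below uses the standard split-variable reformulation x = u − v with u, v ≥ 0, i.e. NNLS on the augmented matrix [A, −A], to retrieve a sparse solution of an underdetermined system; the sizes and random instance are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# NNLS on [A, -A] with z = [u; v]; the recovered signal is x = u - v.
z, _ = nnls(np.hstack([A, -A]), b)
x_hat = z[:n] - z[n:]
print("recovered support:", np.nonzero(np.round(x_hat, 6))[0])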

5.
The iterative support detection algorithm is an ℓ1-minimization signal reconstruction algorithm based on a truncated Basis Pursuit (BP) model; it reconstructs signals quickly and requires fewer measurements than the classical L1 algorithm and the iteratively reweighted L1 algorithm. For sparse signals whose nonzero entries have a fast-decaying distribution, this paper proposes an improved algorithm: an iterative support detection algorithm based on a truncated weighted BP model. During the iterations, the improved algorithm detects elements of the original signal's support while adjusting the weights of the reconstruction model, making the model more favourable for exact signal reconstruction. Exploiting the prior information that the nonzero entries of the signals under consideration decay quickly, a thresholding rule is used to detect elements of the original signal's support. Finally, the algorithm is implemented in Matlab numerical experiments, which verify that the iterative support detection algorithm based on the truncated weighted BP model requires fewer measurements than the iteratively reweighted L1 algorithm, and achieves exact signal reconstruction in less time than both the iteratively reweighted L1 algorithm and the conventional iterative support detection algorithm.
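A rough sketch of the detect-and-reweight loop described above, reusing the weighted_l1_min helper from the reweighted-ℓ1 sketch under item 2. The specific threshold rule (a shrinking fraction of the largest magnitude) and the near-zero weight placed on detected support entries are illustrative choices, not the paper's exact rule.

```python
import numpy as np

def isd_truncated_bp(A, b, n_iter=4, rho=0.5):
    """Iterative support detection with a truncated weighted BP model:
    detected support entries get a ~0 weight (effectively unpenalised),
    the rest keep weight 1. Requires weighted_l1_min (item 2 sketch)."""
    n = A.shape[1]
    w = np.ones(n)
    for _ in range(n_iter):
        x = weighted_l1_min(A, b, w)
        thr = rho * np.max(np.abs(x))        # threshold: a fraction of the largest
        support = np.abs(x) > thr            # magnitude (fast-decaying signals)
        w = np.where(support, 1e-6, 1.0)     # truncate the penalty on detected support
        rho *= 0.5                           # loosen the threshold each round
    return x
```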

6.
A full-rank under-determined linear system of equations Ax = b has in general infinitely many possible solutions. In recent years there is a growing interest in the sparsest solution of this equation—the one with the fewest non-zero entries, measured by ∥x∥0. Such solutions find applications in signal and image processing, where the topic is typically referred to as "sparse representation". Considering the columns of A as atoms of a dictionary, it is assumed that a given signal b is a linear composition of few such atoms. Recent work established that if the desired solution x is sparse enough, its uniqueness is guaranteed. Also, pursuit algorithms, approximation solvers for the above problem, are guaranteed to succeed in finding this solution. Armed with these recent results, the problem can be reversed, and formed as an implied matrix factorization problem: given a set of vectors {bi}, known to emerge from such sparse constructions, Axi = bi, with sufficiently sparse representations xi, we seek the matrix A. In this paper we present both theoretical and algorithmic studies of this problem. We establish the uniqueness of the dictionary A, depending on the quantity and nature of the set {bi}, and the sparsity of {xi}. We also describe a recently developed algorithm, the K-SVD, that practically finds the matrix A, in a manner similar to the K-Means algorithm. Finally, we demonstrate this algorithm on several stylized applications in image processing.
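The distinctive step of K-SVD is the atom-by-atom dictionary update via a rank-one SVD of the restricted residual. A minimal sketch of that step follows; the sparse-coding stage (e.g. OMP), which produces the codes X, is assumed and omitted, and the function name is ours.

```python
import numpy as np

def ksvd_dict_update(D, X, B):
    """One K-SVD sweep over the atoms. For atom j: take the residual of the
    signals that use it, with atom j's own contribution added back, and
    replace the atom and its coefficients by the leading singular pair.
    D: dictionary (d x K), X: sparse codes (K x N), B: signals (d x N)."""
    for j in range(D.shape[1]):
        users = np.nonzero(X[j, :])[0]       # signals whose code uses atom j
        if users.size == 0:
            continue
        E = B[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                    # updated atom (unit norm)
        X[j, users] = s[0] * Vt[0, :]        # updated coefficients
    return D, X
```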

7.
The orthogonal matching pursuit (OMP) algorithm is an efficient method for the recovery of a sparse signal in compressed sensing, due to its easy implementation and low complexity. In this paper, the robustness of the OMP algorithm under the restricted isometry property (RIP) is presented. It is shown that δ_K + √K θ_{K,1} < 1 is sufficient for the OMP algorithm to exactly recover the support of an arbitrary K-sparse signal if its nonzero components are large enough, for both ℓ2-bounded and ℓ∞-bounded noises.
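A minimal NumPy sketch of the OMP loop itself (the guarantees above concern when this loop succeeds): greedily pick the column most correlated with the residual, then re-fit all selected columns by least squares.

```python
import numpy as np

def omp(A, b, K):
    """Orthogonal matching pursuit for a K-sparse approximation of b."""
    support, r = [], b.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated column
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef                      # orthogonalised residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```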

8.
Orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP) in the sense that N (N ≥ 1) indices are selected per iteration instead of 1. In this paper, the theoretical performance of OMMP under the restricted isometry property (RIP) is presented. We demonstrate that OMMP can exactly recover any K-sparse signal from fewer observations y = Φx, provided that the sampling matrix Φ satisfies δ_{KN−N+1} + (K/N)^{1/2} θ_{KN−N+1,N} < 1. Moreover, the performance of OMMP for support recovery from noisy observations is also discussed. It is shown that, for the ℓ2-bounded and ℓ∞-bounded noisy cases, OMMP can recover the true support of any K-sparse signal under conditions on the restricted isometry property of the sampling matrix Φ and the minimum magnitude of the nonzero components of the signal.
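OMMP differs from the OMP sketch above only in adding the N most correlated columns per iteration. In the sketch below, stopping once at least K indices have been selected is an illustrative choice rather than the paper's stopping rule.

```python
import numpy as np

def ommp(A, b, K, N=2):
    """Orthogonal multi-matching pursuit: top-N correlations per iteration."""
    support, r = [], b.copy()
    while len(support) < K:
        c = np.abs(A.T @ r)
        c[support] = -np.inf                      # never reselect chosen columns
        new = np.argpartition(c, -N)[-N:]         # N most correlated columns
        support.extend(int(i) for i in new)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef              # re-fit and update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```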

9.
Boosting in the context of linear regression has become more attractive with the invention of least angle regression (LARS), where the connection between the lasso and forward stagewise fitting (boosting) has been established. Earlier it had been found that boosting is a functional gradient optimization. Instead of the gradient, we propose a conjugate direction method (CDBoost). As a result, we obtain a fast forward stepwise variable selection algorithm. The conjugate direction of CDBoost is analogous to the constrained gradient in boosting. Using this analogy, we generalize CDBoost to: (1) include small step sizes (shrinkage), which often improves prediction accuracy; and (2) the nonparametric setting with fitting methods such as trees or splines, where least angle regression and the lasso seem to be infeasible. The step size in CDBoost tends to govern the trade-off between ℓ0- and ℓ1-penalization. This makes CDBoost surprisingly flexible. We compare the different methods on simulated and real datasets. CDBoost achieves the best predictions mainly in complicated settings with correlated covariates, where it is difficult to determine the contribution of a given covariate to the response. The gain of CDBoost over boosting is especially high in sparse cases with high signal-to-noise ratio and few effective covariates.
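For context, the forward stagewise fitting (boosting) baseline that CDBoost builds on can be sketched as follows. This shows only the plain gradient-style stagewise loop with a small fixed step (shrinkage), not the conjugate-direction variant; the step size and iteration count are illustrative.

```python
import numpy as np

def forward_stagewise(X, y, step=0.01, n_iter=2000):
    """Forward stagewise linear regression: at each step, nudge the
    coefficient of the covariate most correlated with the residual."""
    n, p = X.shape
    beta, r = np.zeros(p), y.astype(float).copy()
    for _ in range(n_iter):
        corr = X.T @ r
        j = int(np.argmax(np.abs(corr)))          # best covariate this round
        beta[j] += step * np.sign(corr[j])        # small step: shrinkage
        r -= step * np.sign(corr[j]) * X[:, j]    # update residual
    return beta
```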

10.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log² N), where N is the length of the signal.
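One CoSaMP iteration is short enough to sketch in full: form the signal proxy, identify the 2s largest proxy entries, merge with the current support, solve least squares over the merged support, prune to the s largest coefficients, and update the residual. A minimal NumPy sketch; the fixed iteration count is an illustrative stopping rule.

```python
import numpy as np

def cosamp(A, b, s, n_iter=20):
    """CoSaMP: identify / merge / estimate / prune, repeated."""
    n = A.shape[1]
    x, r = np.zeros(n), b.copy()
    for _ in range(n_iter):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # identify 2s entries
        T = np.union1d(omega, np.nonzero(x)[0]).astype(int)  # merge supports
        coef, *_ = np.linalg.lstsq(A[:, T], b, rcond=None)   # estimate on T
        x = np.zeros(n)
        keep = np.argsort(np.abs(coef))[-s:]                 # prune to s largest
        x[T[keep]] = coef[keep]
        r = b - A @ x                                        # update residual
    return x
```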
