Similar Literature
20 similar documents found.
1.
Sigma Delta (\(\Sigma \Delta \)) quantization, a quantization method that first surfaced in the 1960s, is now widely used in digital products such as cameras, cell phones, and radars. The method achieves strong robustness to quantization noise by sampling the input signal at a super-Nyquist rate. Compressed sensing (CS) is a frugal acquisition method that exploits the sparsity structure of the target signal to reduce the number of samples required for lossless acquisition. Treating this reduced number as the effective dimensionality of the set of sparse signals, one can define a relative oversampling/subsampling rate as the ratio between the actual sampling rate and this effective dimensionality. When the “compressed” analog measurements are recorded via Sigma Delta quantization, a natural question arises: will the signal reconstruction error, previously shown to decay polynomially in the plain oversampling rate for band-limited functions, now decay polynomially in the relative oversampling rate? Answering this question is one of the main goals in this direction. The study of quantization in CS has so far been limited to proving error convergence results for Gaussian and sub-Gaussian sensing matrices as the number of bits and/or the number of samples grows to infinity. In this paper, we provide a first result for the more realistic Fourier sensing matrices. The main idea is to randomly permute the Fourier samples before feeding them into the quantizer. We show that the random permutation effectively increases the low-frequency power of the measurements and thus enhances the quality of \(\Sigma \Delta \) quantization.
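A minimal sketch of the pipeline discussed above, assuming a standard greedy first-order \(\Sigma \Delta \) loop, a uniform quantization grid, and NumPy; the paper's reconstruction guarantees and the exact permutation analysis are not reproduced here:

```python
import numpy as np

def first_order_sigma_delta(y, delta=0.1):
    """Greedy first-order Sigma-Delta quantization of a measurement vector y.

    q[i] is chosen from the uniform grid delta*Z so that the running state
    u[i] = u[i-1] + y[i] - q[i] stays bounded by delta/2.
    """
    u = 0.0
    q = np.zeros_like(y)
    for i, yi in enumerate(y):
        q[i] = delta * np.round((u + yi) / delta)
        u = u + yi - q[i]
    return q

rng = np.random.default_rng(0)
n, m, s = 256, 80, 5
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Partial Fourier measurements (real/imag parts stacked), then a random permutation
# of the measurement order before quantization -- the step the abstract advocates.
rows = rng.choice(n, m, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
y = np.concatenate([(F @ x).real, (F @ x).imag])
perm = rng.permutation(len(y))
q = first_order_sigma_delta(y[perm])          # quantize the permuted measurements
print("max Sigma-Delta state magnitude:", np.max(np.abs(np.cumsum(y[perm] - q))))
```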

2.
This paper deals with two related problems, namely distance-preserving binary embeddings and quantization for compressed sensing. First, we propose fast methods to replace points from a subset X ⊂ ℝ^n, equipped with the Euclidean metric, with points in the cube {±1}^m, and we associate the cube with a pseudometric that approximates Euclidean distance among points in X. Our methods rely on quantizing fast Johnson-Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both of which admit fast transforms. Our quantization methods utilize noise shaping, and include sigma-delta schemes and distributed noise-shaping schemes. The resulting approximation errors decay polynomially and exponentially fast in m, depending on the embedding method. This dramatically outperforms the current decay rates associated with binary embeddings and Hamming distances. Additionally, it is the first such binary embedding result that applies to fast Johnson-Lindenstrauss maps while preserving ℓ2 norms. Second, we again consider noise-shaping schemes, albeit this time to quantize compressed sensing measurements arising from bounded orthonormal ensembles and partial circulant matrices. We show that these methods yield a reconstruction error that again decays with the number of measurements (and bits), when using convex optimization for reconstruction. Specifically, for sigma-delta schemes, the error decays polynomially in the number of measurements, and it decays exponentially for distributed noise-shaping schemes based on beta encoding. These results are near optimal and the first of their kind dealing with bounded orthonormal systems. © 2019 Wiley Periodicals, Inc.
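A toy sketch of one of the noise-shaping quantizers mentioned above, a greedy beta encoder producing ±1 bits; the value of beta, the input scaling, and the boundedness condition in the comment are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def greedy_beta_encoder(y, beta=1.5):
    """Distributed noise shaping (beta encoding): pick q[i] in {-1, +1} greedily
    so that the state u[i] = beta*u[i-1] + y[i] - q[i] stays bounded.
    With 1 < beta < 2, |y[i]| <= 2 - beta keeps |u| <= 1 (illustrative condition)."""
    u = 0.0
    q = np.zeros(len(y))
    for i, yi in enumerate(y):
        q[i] = 1.0 if beta * u + yi >= 0 else -1.0
        u = beta * u + yi - q[i]
    return q

rng = np.random.default_rng(1)
y = 0.4 * rng.uniform(-1.0, 1.0, 64)          # suitably scaled measurements
bits = greedy_beta_encoder(y, beta=1.5)
print(bits[:10])
```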

3.
Redundant systems such as frames are often used to represent a signal for error correction, denoising and general robustness. In the digital domain quantization needs to be performed. Given the redundancy, the distribution of quantization errors can be rather complex. In this paper we study the quantization error for a signal X represented by a frame, using a lattice quantizer. We completely characterize the asymptotic distribution of the quantization error as the cell size of the lattice goes to zero. We apply these results to obtain necessary and sufficient conditions for the asymptotic form of the White Noise Hypothesis in the case of the pulse-code modulation scheme.
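A small numerical sketch of the setting, assuming a random finite frame, a cubic lattice quantizer of cell size delta, and reconstruction with the canonical dual frame; the asymptotic error analysis itself is the paper's contribution and is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, delta = 4, 12, 0.05                 # signal dim, frame size, lattice cell size
E = rng.standard_normal((m, d))           # rows are the frame vectors (random frame)
x = rng.standard_normal(d)                # signal in R^d

c = E @ x                                 # frame coefficients
q = delta * np.round(c / delta)           # cubic-lattice (PCM-style) quantization
x_hat = np.linalg.pinv(E) @ q             # reconstruct with the canonical dual frame

err = c - q                               # per-coefficient quantization error
print("coefficient errors within [-delta/2, delta/2]:", np.all(np.abs(err) <= delta / 2))
print("signal reconstruction error:", np.linalg.norm(x - x_hat))
```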

4.
We study the recovery conditions of weighted mixed $\ell_2/\ell_p$ minimization for block sparse signal reconstruction from compressed measurements when partial block support information is available. We show theoretically that the extended block restricted isometry property can ensure robust recovery when the data fidelity constraint is expressed in terms of an $\ell_q$ norm of the residual error, thus establishing a setting wherein we are not restricted to Gaussian measurement noise. We illustrate the results with a series of numerical experiments.
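A hedged sketch of the convex special case (p = 1 across blocks, an $\ell_1$ data-fidelity constraint, and illustrative weights encoding partial block-support knowledge), assuming the cvxpy package; it is not the paper's exact model or parameters:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m, block = 64, 32, 4
blocks = [np.arange(i, i + block) for i in range(0, n, block)]

# Block-sparse ground truth with 3 active blocks
x0 = np.zeros(n)
for b in rng.choice(len(blocks), 3, replace=False):
    x0[blocks[b]] = rng.standard_normal(block)

A = rng.standard_normal((m, n)) / np.sqrt(m)
noise = 0.01 * rng.standard_normal(m)          # noise need not be Gaussian in the theory
y = A @ x0 + noise

# Partial block-support knowledge: down-weight two blocks believed to be active
w = np.ones(len(blocks))
active = [b for b in range(len(blocks)) if np.any(x0[blocks[b]] != 0)]
w[active[:2]] = 0.1

x = cp.Variable(n)
objective = cp.Minimize(sum(w[i] * cp.norm(x[blocks[i]], 2) for i in range(len(blocks))))
constraints = [cp.norm(A @ x - y, 1) <= 0.02 * m]   # l_q fidelity with q = 1 (illustrative)
cp.Problem(objective, constraints).solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```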

5.
Sampling information using timing is an approach that has received renewed attention in sampling theory. The question is how to map amplitude information into the timing domain. One such encoder, called a time encoding machine, was introduced by Lazar and Tóth (2004 [23]) for the special case of band-limited functions. In this paper, we extend their result to a general framework including shift-invariant subspaces. We prove that time encoding machines may be considered as non-uniform sampling devices, where time locations are unknown a priori. Using this fact, we show that perfect representation and reconstruction of a signal with a time encoding machine is possible whenever this device satisfies some density property. We prove that this method is robust under timing quantization, and therefore can lead to the design of simple and energy-efficient sampling devices.
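A toy simulation of one common time encoding machine, an integrate-and-fire encoder that emits a spike whenever the integral of the biased signal reaches a threshold, so amplitude information is mapped into spike timing; the bias, threshold, and discretization below are illustrative and this is not necessarily the exact device analyzed in the paper:

```python
import numpy as np

def integrate_and_fire_tem(x, dt, b=2.0, theta=0.05):
    """Emit spike times t_k such that the integral of (x(s) + b) over [t_k, t_{k+1}]
    equals theta.  Requires b > max|x| so the integrand stays positive."""
    times, acc, t = [], 0.0, 0.0
    for sample in x:
        acc += (sample + b) * dt
        t += dt
        if acc >= theta:
            times.append(t)
            acc -= theta
    return np.array(times)

t = np.linspace(0, 1, 10_000)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)   # |signal| <= 1.5 < b
spikes = integrate_and_fire_tem(signal, dt=t[1] - t[0])
# Consecutive spike intervals encode local averages of x: shorter gaps <-> larger amplitude.
print(len(spikes), "spike times; first few gaps:", np.diff(spikes)[:5])
```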

7.
Image decoding optimization based on compressive sensing
Transform-based image codecs follow a basic principle: the reconstruction quality is determined by the quantization level. Compressive sensing (CS) breaks this limit, showing that sparse signals can be recovered from incomplete or even corrupted information by solving a convex optimization problem. For the same acquired image data, if the image is represented sparsely enough, it can be reconstructed more accurately by CS recovery than by the inverse transform. In this paper, we therefore use a modified TV operator to enhance the sparsity of the image representation and the reconstruction accuracy, take the image information from transform coefficients corrupted by quantization noise, and reconstruct the image by CS recovery instead of the inverse transform. This yields a CS-based JPEG decoding scheme, and experimental results demonstrate that the proposed methods significantly improve the PSNR and visual quality of the reconstructed images compared with the original JPEG decoder.
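A hedged sketch of the general idea on a tiny synthetic image, using an orthonormal DCT over the whole image instead of 8×8 JPEG blocks, cvxpy's built-in total-variation atom in place of the paper's modified TV operator, and an illustrative quantization step; it is not the paper's exact decoder:

```python
import numpy as np
import cvxpy as cp
from scipy.fft import dct

rng = np.random.default_rng(4)
N, Delta = 16, 8.0                              # tiny image, coarse quantization step
D = dct(np.eye(N), axis=0, norm='ortho')        # orthonormal DCT-II matrix

# Piecewise-constant test image (sparse gradients, so a TV prior is appropriate)
X0 = np.zeros((N, N)); X0[4:12, 4:12] = 100.0; X0 += 2.0 * rng.standard_normal((N, N))

Y = D @ X0 @ D.T                                # transform coefficients
Yq = Delta * np.round(Y / Delta)                # quantized coefficients (the "JPEG" data)

X_inv = D.T @ Yq @ D                            # classical decoder: inverse transform

X = cp.Variable((N, N))                         # CS-style decoder: TV + consistency
cons = [cp.abs(D @ X @ D.T - Yq) <= Delta / 2]  # stay inside the quantization intervals
cp.Problem(cp.Minimize(cp.tv(X)), cons).solve()

print("inverse-transform error:  ", np.linalg.norm(X_inv - X0))
print("TV-consistent decoder err:", np.linalg.norm(X.value - X0))
```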

8.
陈凤华  李双安 《数学杂志》2016,36(6):1291-1298
This paper studies the application of compressed sensing to large-scale signal recovery problems. Using a modified HS conjugate gradient method together with a smoothing technique, an algorithm with good reconstruction performance is obtained. Numerical experiments show that solving large-scale signal recovery problems with the modified HS conjugate gradient method is feasible.
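A generic sketch of the approach, assuming HS stands for Hestenes-Stiefel (as is standard in the conjugate gradient literature), a smoothing of the ℓ1 penalty by sqrt(x² + ε), and a simple backtracking line search; the paper's specific modification of the HS formula is not reproduced:

```python
import numpy as np

def smoothed_l1_cg(A, y, lam=0.01, eps=1e-3, iters=200):
    """Nonlinear conjugate gradient (Hestenes-Stiefel beta) for the smoothed problem
    f(x) = 0.5*||Ax - y||^2 + lam * sum(sqrt(x^2 + eps)).  Illustrative sketch."""
    def grad(x):
        return A.T @ (A @ x - y) + lam * x / np.sqrt(x**2 + eps)
    def f(x):
        return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.sqrt(x**2 + eps))

    x = np.zeros(A.shape[1])
    g = grad(x); d = -g
    for _ in range(iters):
        t, fx = 1.0, f(x)                              # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        yk = g_new - g
        beta_hs = (g_new @ yk) / (d @ yk + 1e-16)      # Hestenes-Stiefel formula
        d = -g_new + max(beta_hs, 0.0) * d             # restart when beta < 0
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(5)
n, m, s = 400, 120, 10
x0 = np.zeros(n); x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = smoothed_l1_cg(A, A @ x0)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```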

9.
We analyze a multiple-input multiple-output (MIMO) radar model and provide recovery results for a compressed sensing (CS) approach. In MIMO radar, different pulses are emitted by several transmitters and the echoes are recorded at several receiver nodes. Under reasonable assumptions the transformation from emitted pulses to received echoes can approximately be regarded as linear. For the considered model, and for many radar tasks in general, sparsity of targets within the considered angle-range-Doppler domain is a natural assumption. Therefore, it is possible to apply methods from CS in order to reconstruct the parameters of the targets. Assuming Gaussian random pulses, the resulting measurement matrix becomes a highly structured random matrix. Our first main result provides an estimate for the well-known restricted isometry property (RIP), ensuring stable and robust recovery. We require more measurements than standard results from CS, such as those for Gaussian random measurements. Nevertheless, we show that due to the special structure of the considered measurement matrix our RIP result is in fact optimal (up to possibly logarithmic factors). Our further two main results on nonuniform recovery (i.e., for a fixed sparse target scene) reveal how the fine structure of the support set, not only its size, affects the (nonuniform) recovery performance. We show that for certain “balanced” support sets, reconstruction with essentially the optimal number of measurements is possible. Indeed, we introduce a parameter measuring the well-behavedness of the support set and recover standard results from CS for near-optimal parameter choices. We prove recovery results both for perfect recovery of the support set in the case of exactly sparse vectors and an \(\ell _2\)-norm approximation result for reconstruction under sparsity defect. Our analysis complements earlier work by Strohmer & Friedlander and deepens the understanding of the considered MIMO radar model. Thereby, and apparently for the first time in CS theory, we prove theoretical results in which the difference between nonuniform and uniform recovery consists of more than just logarithmic factors.

10.
Parallel acquisition systems are employed successfully in a variety of different sensing applications when a single sensor cannot provide enough measurements for a high-quality reconstruction. In this paper, we consider compressed sensing (CS) for parallel acquisition systems when the individual sensors use subgaussian random sampling. Our main results are a series of uniform recovery guarantees which relate the number of measurements required to the basis in which the solution is sparse and certain characteristics of the multi-sensor system, known as sensor profile matrices. In particular, we derive sufficient conditions for optimal recovery, in the sense that the number of measurements required per sensor decreases linearly with the total number of sensors, and demonstrate explicit examples of multi-sensor systems for which this holds. We establish these results by proving the so-called Asymmetric Restricted Isometry Property (ARIP) for the sensing system and use this to derive both nonuniversal and universal recovery guarantees. Compared to existing work, our results not only lead to better stability and robustness estimates but also provide simpler and sharper constants in the measurement conditions. Finally, we show how the problem of CS with block-diagonal sensing matrices can be viewed as a particular case of our multi-sensor framework. Specializing our results to this setting leads to a recovery guarantee that is at least as good as existing results.

11.
In the transmission, storage, and coding of digital signals we frequently perform A/D conversion using quantization. In this paper we study the maximal and mean square errors that result from quantization. We focus on the sigma-delta modulation quantization scheme in the finite frame expansion setting. We show that this problem is related to the classical Traveling Salesman Problem (TSP) in Euclidean space. It is known [Benedetto et al., Sigma-Delta (\(\Sigma \Delta \)) quantization and finite frames, IEEE Trans. Inform. Theory 52, 1990–2005 (2006)] that the error bounds for the sigma-delta scheme depend on the ordering of the frame elements. By examining a priori bounds for the Euclidean TSP we show that the error bounds for the sigma-delta scheme are in general superior to those for the pulse code modulation (PCM) scheme. We also give a recursive algorithm for finding an ordering of the frame elements that leads to good maximal error and mean square error. Supported in part by the National Science Foundation grant DMS-0139261.
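A small numerical comparison of first-order sigma-delta and PCM quantization of the coefficients of a random unit-norm frame, reconstructing with the canonical dual; the ordering used below is simply the given one, not the TSP-based ordering constructed in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m, delta = 3, 60, 0.1
E = rng.standard_normal((m, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-norm frame, rows e_1..e_m
F = np.linalg.pinv(E)                           # canonical dual frame (d x m)
x = rng.uniform(-0.5, 0.5, d)
c = E @ x                                       # frame coefficients, in the given order

# PCM: quantize each coefficient independently
q_pcm = delta * np.round(c / delta)

# First-order sigma-delta: quantize sequentially, feeding back the running error
u, q_sd = 0.0, np.zeros(m)
for i in range(m):
    q_sd[i] = delta * np.round((u + c[i]) / delta)
    u = u + c[i] - q_sd[i]

print("PCM error:        ", np.linalg.norm(x - F @ q_pcm))
print("Sigma-Delta error:", np.linalg.norm(x - F @ q_sd))
```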

12.
By providing a novel perspective on signal and image processing, compressed sensing (CS) has attracted increasing attention. The accuracy of the reconstruction algorithm plays an important role in real applications of CS theory. In this paper, a generalized reconstruction model that simultaneously accounts for inaccuracies in the measurement matrix and in the measurement data is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of least squares (LS) estimation and combinational M-estimation, is proposed. An iterative scheme that combines the merits of the homotopy method and the artificial physics optimization (APO) algorithm is developed for minimizing the proposed objective functional. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems.

13.
One-bit quantization is a method of representing bandlimited signals by ±1 sequences that are computed from regularly spaced samples of these signals; as the sampling density λ → ∞, convolving these one-bit sequences with appropriately chosen filters produces increasingly close approximations of the original signals. This method is widely used for analog-to-digital and digital-to-analog conversion, because it is less expensive and simpler to implement than the more familiar critical sampling followed by fine-resolution quantization. However, unlike fine-resolution quantization, the accuracy of one-bit quantization is not well understood. A natural error lower bound that decreases like 2^{−λ} can easily be given using information-theoretic arguments. Yet, no one-bit quantization algorithm was known with an error decay estimate even close to exponential decay. In this paper, we construct an infinite family of one-bit sigma-delta quantization schemes that achieves this goal. In particular, using this family, we prove that the error for π-bandlimited signals is at most O(2^{−0.07λ}). © 2003 Wiley Periodicals, Inc.
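A toy illustration of the setup, assuming the basic first-order one-bit sigma-delta loop and reconstruction by convolving the bit stream with a windowed-sinc lowpass filter; the higher-order schemes that achieve the exponential O(2^{−0.07λ}) rate are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 64                                    # oversampling ratio relative to Nyquist
T = np.arange(0, 20, 1.0 / lam)             # sample times (Nyquist spacing = 1)
x = 0.5 * np.sin(2 * np.pi * 0.2 * T) + 0.3 * np.cos(2 * np.pi * 0.35 * T)  # bandlimited, |x| < 1

# One-bit first-order sigma-delta: q_n in {-1, +1}; u_n = u_{n-1} + x_n - q_n stays in [-1, 1].
u, q = 0.0, np.zeros_like(x)
for n in range(len(x)):
    q[n] = 1.0 if u + x[n] >= 0 else -1.0
    u = u + x[n] - q[n]

# Reconstruct by lowpass filtering the bit stream (windowed-sinc FIR, cutoff near Nyquist).
taps = np.arange(-4 * lam, 4 * lam + 1)
h = np.sinc(taps / lam) / lam * np.hamming(len(taps))
h /= h.sum()                                # normalize the DC gain of the filter
x_hat = np.convolve(q, h, mode='same')

interior = slice(4 * lam, -4 * lam)         # ignore filter edge effects
print("max reconstruction error:", np.max(np.abs(x - x_hat)[interior]))
```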

14.
胡登洲  何兴 《应用数学和力学》2019,40(11):1270-1277
Compressed sensing (CS) is a new signal sampling technique: for sparse signals, it can reconstruct the signal from far fewer samples than required by the classical Nyquist sampling theorem. In the CS setting, a continuous-time dynamical system is used to study the ℓ1-ℓ2-norm sparse signal reconstruction problem. A sparse signal reconstruction algorithm based on a fixed-time gradient flow is proposed, and the algorithm is shown to be stable in the sense of Lyapunov and to converge to the optimal solution of the problem. Finally, a comparison with an existing projection neural network algorithm demonstrates the feasibility of the algorithm and its advantage in convergence speed.

15.
In this paper, we present a new algorithm to accelerate the Chambolle gradient projection method for total variation image restoration. The proposed method considers an approximation of the Hessian based on the secant equation. Combined with the quasi-Cauchy equations and diagonal updating, we obtain a positive definite diagonal matrix. In the proposed method, this positive definite diagonal matrix is used in place of the constant time step of Chambolle's method. The global convergence of the proposed scheme is proved. Some numerical results illustrate the efficiency of this method. Moreover, we also extend the quasi-Newton diagonal updating method to solve nonlinear systems of monotone equations. Performance comparisons show that the proposed method is efficient. A practical application of the monotone equations is shown and tested on sparse signal reconstruction in compressed sensing. Copyright © 2015 John Wiley & Sons, Ltd.
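For reference, a minimal NumPy implementation of the baseline being accelerated, Chambolle's dual gradient projection for TV denoising with a constant step size; the paper's diagonal quasi-Newton scaling is not reproduced, and the regularization parameter below is illustrative:

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]           # forward differences, Neumann boundary
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy                              # negative adjoint of grad

def chambolle_tv_denoise(f, lam=0.15, tau=0.125, iters=200):
    """Chambolle's dual projection for min_u TV(u) + ||u - f||^2 / (2*lam); tau <= 1/8."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)   # projected gradient step on the dual
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)

rng = np.random.default_rng(8)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = chambolle_tv_denoise(noisy)
print("noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```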

16.
In signal quantization, it is well known that introducing adaptivity to quantization schemes can improve their stability and accuracy in quantizing bandlimited signals. However, adaptive quantization has only been designed for one-dimensional signals. The contribution of this paper is two-fold: (i) we propose the first family of two-dimensional adaptive quantization schemes that maintain the same mathematical and practical merits as their one-dimensional counterparts, and (ii) we show that both the traditional one-dimensional and the new two-dimensional quantization schemes can effectively quantize signals with jump discontinuities, which immediately enables the use of adaptive quantization on images. Under mild conditions, we show that by using adaptivity, the proposed method is able to reduce the quantization error of images from the presently best O(P) to the much smaller O(s), where s is the number of jump discontinuities in the image and P (P ≫ s) is the total number of samples. This (P/s)-fold error reduction is achieved via a total variation norm regularized decoder, whose formulation is inspired by the mathematical super-resolution theory in the field of compressed sensing. Compared to the super-resolution setting, our error reduction is achieved without requiring adjacent spikes/discontinuities to be well separated, which ensures its broad scope of application. We numerically demonstrate the efficacy of the new scheme on medical and natural images. We observe that for images with small pixel intensity values, the new method can significantly increase image quality over the state-of-the-art method. © 2022 Wiley Periodicals, Inc.

17.
We discuss the trade-off between sampling and quantization in signal processing for the purpose of minimizing the error of the reconstructed signal subject to the constraint that the digitized signal fits in a given amount of memory. For signals with different regularities, we estimate the intrinsic errors from finite sampling and quantization, and determine the sampling and quantization resolutions.
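An illustrative toy model of the trade-off (not the paper's error estimates): assume m samples at b bits each under a total budget of B = m·b bits, a sampling error of order m^(−r) for a signal of smoothness r, and a quantization error of order 2^(−b); the constants and the additive model are assumptions made only to show how the optimal split shifts with the budget:

```python
import numpy as np

def best_split(B, r=2.0, C1=1.0, C2=1.0):
    """Toy trade-off: m samples at b bits each with m*b <= B total bits.
    Sampling error ~ C1*m**(-r); quantization error ~ C2*2**(-b)."""
    best = None
    for b in range(1, 33):                      # bits per sample
        m = B // b                              # samples affordable under the budget
        if m < 1:
            break
        err = C1 * m ** (-r) + C2 * 2.0 ** (-b)
        if best is None or err < best[0]:
            best = (err, m, b)
    return best

for B in (256, 1024, 4096):
    err, m, b = best_split(B)
    print(f"budget {B:5d} bits -> m = {m:4d} samples, b = {b:2d} bits/sample, error = {err:.2e}")
```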

18.
Non-oscillatory schemes are widely used in numerical approximations of nonlinear conservation laws. The Nessyahu–Tadmor (NT) scheme is an example of a second order scheme that is both robust and simple. In this paper, we prove a new stability property of the NT scheme based on the standard minmod reconstruction in the case of a scalar strictly convex conservation law. This property is similar to the one-sided Lipschitz condition for first order schemes. Using this new stability, we derive the convergence of the NT scheme to the exact entropy solution without imposing any nonhomogeneous limitations on the method. We also derive an error estimate for monotone initial data.
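A minimal implementation of the standard staggered NT scheme with minmod reconstruction for Burgers' equation, the scalar strictly convex setting covered by the convergence result above; grid size, CFL number, and initial data are illustrative:

```python
import numpy as np

def minmod(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def nt_step(u, lam):
    """One staggered Nessyahu-Tadmor step for Burgers' equation u_t + (u^2/2)_x = 0
    on a periodic grid; lam = dt/dx.  Output values live on the half-cell-shifted grid."""
    f = 0.5 * u ** 2
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))      # minmod slopes of u
    df = minmod(np.roll(f, -1) - f, f - np.roll(f, 1))      # minmod slopes of f
    u_half = u - 0.5 * lam * df                             # midpoint-in-time predictor
    f_half = 0.5 * u_half ** 2
    return (0.5 * (u + np.roll(u, -1))                      # staggered cell average
            + 0.125 * (du - np.roll(du, -1))
            - lam * (np.roll(f_half, -1) - f_half))

N = 400
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(x)                                   # smooth data that steepens into a shock
t, T = 0.0, 3.0
while t < T:
    dt = min(0.45 * dx / np.max(np.abs(u)), T - t)          # CFL: lam * max|f'(u)| < 1/2
    u = nt_step(u, dt / dx)
    t += dt
print("total variation of the computed solution:", np.sum(np.abs(np.diff(u))))
```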

19.
Justin Romberg 《PAMM》2007,7(1):2010011-2010012
Several recent results in compressive sampling show that a sparse signal (i.e. a signal which can be compressed in a known orthobasis) can be efficiently acquired by taking linear measurements against random test functions. In this paper, we show that these results can be extended to measurements taken by convolving with a random pulse and then subsampling. The measurement scheme is universal in that it complements (with high probability) any fixed orthobasis we use to represent the signal. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
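A short sketch of the measurement scheme: form the circulant matrix of a random ±1 pulse, subsample its output, and recover by standard ℓ1 minimization (basis pursuit); cvxpy is assumed for the solve, and the dimensions are illustrative:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
n, m, s = 256, 90, 6
x0 = np.zeros(n); x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # sparse signal

pulse = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)       # random +/-1 test pulse
A = np.array([np.roll(pulse, j) for j in range(n)]).T      # circulant matrix: A @ x = pulse (*) x
assert np.allclose(A @ x0, np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x0)).real)

keep = rng.choice(n, m, replace=False)                     # subsample the convolution output
Phi, y = A[keep], A[keep] @ x0                             # m compressive measurements

x = cp.Variable(n)                                         # standard l1 recovery (basis pursuit)
cp.Problem(cp.Minimize(cp.norm1(x)), [Phi @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```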

20.
The iterative support detection (ISD) algorithm is an ℓ1-minimization signal reconstruction algorithm based on a truncated basis pursuit (BP) model; it reconstructs signals quickly and requires fewer measurements than the classical L1 algorithm and the iterative reweighted L1 algorithm. For sparse signals whose nonzero entries follow a fast-decaying distribution, this paper proposes an improved algorithm: an iterative support detection algorithm based on a truncated weighted BP model. During the iterations, the improved algorithm detects elements of the support of the original signal while adjusting the weights of the reconstruction model, which makes the model more favorable for exact reconstruction. Using the prior information that the nonzero entries of the signal follow a fast-decaying distribution, a thresholding rule is used to detect elements of the support of the original signal. Finally, the algorithm is implemented in Matlab numerical experiments, which verify that the iterative support detection algorithm based on the truncated weighted BP model requires fewer measurements than the iterative reweighted L1 algorithm, and achieves exact reconstruction in less time than both the iterative reweighted L1 algorithm and the traditional iterative support detection algorithm.
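A hedged sketch of the iterative-support-detection loop described above: solve a weighted basis-pursuit problem, detect the support by thresholding, zero the weights on the detected entries (the truncation), and repeat; the threshold rule, the number of outer iterations, and the use of cvxpy for the inner solve are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp

def isd_truncated_weighted_bp(A, y, outer=4, thresh_ratio=0.1):
    """Iterative support detection with a truncated (weighted) BP model.
    Detected entries get weight 0, i.e. they are excluded from the l1 penalty."""
    n = A.shape[1]
    w = np.ones(n)
    for _ in range(outer):
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
        xv = x.value
        support = np.abs(xv) > thresh_ratio * np.max(np.abs(xv))   # threshold rule (illustrative)
        w = np.ones(n); w[support] = 0.0                           # truncate the detected support
    return xv

rng = np.random.default_rng(10)
n, m, s = 200, 60, 12
x0 = np.zeros(n)
vals = 5.0 * 0.6 ** np.arange(s) * rng.choice([-1, 1], s)          # fast-decaying nonzero magnitudes
x0[rng.choice(n, s, replace=False)] = vals
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0
print("ISD reconstruction error:", np.linalg.norm(isd_truncated_weighted_bp(A, y) - x0))
```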
