Similar Literature
20 similar documents found
1.
Transform-based image codecs follow a basic principle: reconstruction quality is determined by the quantization level. Compressive sensing (CS) breaks this limit and states that sparse signals can be perfectly recovered from incomplete or even corrupted information by solving a convex optimization problem. Given the same acquisition of images, if the images are represented sparsely enough, they can be reconstructed more accurately by CS recovery than by an inverse transform. In this paper, we therefore utilize a modified TV operator to enhance image sparse representation and reconstruction accuracy, acquire image information from transform coefficients corrupted by quantization noise, and reconstruct the images by CS recovery instead of the inverse transform. The result is a CS-based JPEG decoding scheme; experimental results demonstrate that the proposed methods significantly improve the PSNR and visual quality of reconstructed images compared with the original JPEG decoder.
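The core claim, that a sparsely represented signal can be recovered from incomplete measurements by convex optimization, can be illustrated with a generic ℓ1 decoder. The sketch below uses plain iterative soft-thresholding (ISTA) on a synthetic Gaussian sensing problem; it is not the paper's modified-TV JPEG decoder, and all sizes and parameters are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y)              # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 64, 32                                         # signal length > measurement count
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]                # 3-sparse signal
y = A @ x_true                                        # incomplete measurements (m < n)
x_hat = ista(A, y)
```

With only half as many measurements as unknowns, the ℓ1 decoder still locates the sparse support, which an inverse transform of truncated data cannot do.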

2.
We propose the shape-adaptive Haar (SHAH) transform for images, which results in an orthonormal, adaptive decomposition of the image into Haar-wavelet-like components, arranged hierarchically according to decreasing importance, whose shapes reflect the features present in the image. The decomposition is as sparse as it can be for piecewise-constant images. It is performed via a stepwise bottom-up algorithm with quadratic computational complexity; however, nearly linear variants also exist. SHAH is rapidly invertible. We show how to use SHAH for image denoising. Having performed the SHAH transform, the coefficients are hard- or soft-thresholded, and the inverse transform taken. The SHAH image denoising algorithm compares favorably to the state of the art for piecewise-constant images. A clear asset of the methodology is its very general scope: it can be used with any images or more generally with any data that can be represented as graphs or networks.
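SHAH itself is shape-adaptive, but the threshold-and-invert denoising recipe it follows can be sketched with the ordinary (non-adaptive) 1-D Haar wavelet. The code below hard-thresholds the detail coefficients of a two-level Haar transform of a noisy piecewise-constant signal; the signal, noise level, and threshold are assumptions for illustration.

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal 1-D Haar transform (len(x) even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth (approximation) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_inv(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def denoise(x, thresh):
    """Hard-threshold the detail coefficients of a 2-level Haar decomposition."""
    s1, d1 = haar_fwd(x)
    s2, d2 = haar_fwd(s1)
    d1[np.abs(d1) < thresh] = 0.0            # kill small (noise-dominated) details
    d2[np.abs(d2) < thresh] = 0.0
    return haar_inv(haar_inv(s2, d2), d1)

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, -2.0, 1.0], 16)          # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(64)
restored = denoise(noisy, thresh=0.9)
```

Because the jumps produce a few large Haar detail coefficients while the noise spreads over many small ones, thresholding suppresses the noise but keeps the edges.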

3.
The problem of recovering a low-rank matrix from a set of observations corrupted with gross sparse error is known as the robust principal component analysis (RPCA) and has many applications in computer vision, image processing and web data ranking. It has been shown that under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely the robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to gross sparse error, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints such as foreground extraction from surveillance video, shadow and specularity removal from face images and video denoising from heavily corrupted data show that our algorithms are competitive to current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.

4.
The core of wavelet-based image compression algorithms is the multiresolution analysis of the wavelet transform together with the quantization and coding of wavelet coefficients at different scales. This paper proposes a compression algorithm combining an energy-based adaptive wavelet transform with vector quantization: under a given energy criterion, whether a sub-image is further decomposed by the wavelet transform is decided by its energy, and an appropriate quantization of the wavelet coefficients is then applied. In the quantization stage, an improved LBG algorithm is used to train the codebook. Experiments show that the algorithm applies to digital images with a wide range of characteristics and achieves high reconstructed image quality together with a high peak signal-to-noise ratio.
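The codebook-training step can be sketched with the classic LBG procedure (codeword splitting followed by Lloyd refinement); the paper's improved LBG variant is not reproduced here, and the cluster data, split perturbation, and iteration counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lbg(train, n_codewords, lloyd_iters=20, eps=0.01):
    """Train a VQ codebook by codeword splitting plus Lloyd refinement (basic LBG)."""
    book = train.mean(axis=0, keepdims=True)          # start from the global centroid
    while len(book) < n_codewords:
        d = eps * rng.standard_normal(book.shape)     # split every codeword in two
        book = np.vstack([book + d, book - d])
        for _ in range(lloyd_iters):                  # Lloyd (k-means) refinement
            dist = ((train[:, None, :] - book[None, :, :]) ** 2).sum(-1)
            idx = dist.argmin(axis=1)                 # nearest-codeword assignment
            for j in range(len(book)):
                if np.any(idx == j):                  # leave empty cells unchanged
                    book[j] = train[idx == j].mean(axis=0)
    return book

# Synthetic 2-D training vectors around four well-separated centres
centres = np.array([[0.0, 0.0], [5.0, 1.0], [1.0, 4.0], [6.0, 5.0]])
train = np.vstack([c + 0.2 * rng.standard_normal((100, 2)) for c in centres])
book = lbg(train, 4)
distortion = ((train[:, None, :] - book[None, :, :]) ** 2).sum(-1).min(axis=1).mean()
```

Each doubling of the codebook halves the cells of the previous partition, so the mean quantization distortion drops stage by stage.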

5.
Lin He, Ti-Chiun Chang, Stanley Osher, Tong Fang, Peter Speier. PAMM, 2007, 7(1):1011207-1011208
Magnetic resonance imaging (MRI) reconstruction from sparsely sampled data has been a difficult problem in the medical imaging field. We approach this problem by formulating a cost functional that includes a constraint term imposed by the raw measurement data in k-space and the L1 norm of a sparse representation of the reconstructed image. The sparse representation is usually realized by total variation regularization and/or a wavelet transform. In recent work we applied Bregman iteration to minimize this functional so as to recover finer scales. Here we propose nonlinear inverse scale space methods in addition to the iterative refinement procedure. Numerical results from the two methods are presented and show that the nonlinear inverse scale space method is a more efficient algorithm than the iterated refinement method. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
Astronomical images are usually assumed to be corrupted by a space-invariant point spread function and Poisson noise. In this paper we propose an original projected inexact Newton method for the solution of the constrained nonnegative minimization problem arising from image deblurring. The problem is ill-posed and the objective function must be regularized. The inner system is inexactly solved by a few Conjugate Gradient iterations. The convergence of the method is proved and its efficiency is tested on simulated astronomical blurred images. The results show that the method produces good reconstructed images at low computational cost. Supported by the Italian MIUR Project Inverse Problems in Medicine and Astronomy 2006–2008.
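The nonnegativity-constrained structure of the problem can be illustrated with a much simpler solver: projected gradient descent on a least-squares data term (standing in for the regularized Poisson-likelihood objective that the paper's projected inexact Newton method handles). The 1-D Gaussian blur matrix and all sizes are illustrative assumptions.

```python
import numpy as np

def projected_gradient(A, b, iters=5000):
    """min ||Ax - b||^2 subject to x >= 0, by projected gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)   # gradient step on the data term
        x = np.maximum(x, 0.0)             # project onto the nonnegative orthant
    return x

# 1-D Gaussian blur matrix as a toy stand-in for a space-invariant PSF
n, sigma = 40, 0.8
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)          # normalise each blur row

x_true = np.zeros(n)
x_true[[10, 25]] = [1.0, 0.7]              # two nonnegative point sources
b = A @ x_true                             # blurred, noise-free observation
x_hat = projected_gradient(A, b)
```

The projection is what prevents the negative ringing that unconstrained deconvolution produces around point sources.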

7.
Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals in 1-bit CS is not available, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. We first show that a necessary assumption (one that has been overlooked in the literature) must be made for some existing theories and discussions of 1-bit CS. Without such an assumption, the solution found by some existing decoding algorithms might be inconsistent with the 1-bit measurements. This motivates us to pursue a new direction: developing uniform and nonuniform recovery theories for 1-bit CS with a new decoding method that always generates a solution consistent with the 1-bit measurements. We focus on an extreme case of 1-bit CS, in which the measurements capture only the sign of the product of a sensing matrix and a signal. We show that the 1-bit CS model can be reformulated equivalently as an ℓ0-minimization problem with linear constraints. This reformulation naturally leads to a new linear-program-based decoding method, referred to as 1-bit basis pursuit, which is remarkably different from existing formulations. It turns out that the uniqueness condition for the solution of 1-bit basis pursuit yields the so-called restricted range space property (RRSP) of the transposed sensing matrix. This concept provides a basis for developing sign recovery conditions for sparse signals through 1-bit measurements. We prove that if the sign of a sparse signal can be exactly recovered from 1-bit measurements with 1-bit basis pursuit, then the sensing matrix must admit a certain RRSP, and that if the sensing matrix admits a slightly enhanced RRSP, then the sign of a k-sparse signal can be exactly recovered with 1-bit basis pursuit.
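A small simulation shows what "the measurements capture only the sign" means in practice. The decoder below is binary iterative hard thresholding (BIHT), a standard 1-bit CS baseline used here as a stand-in; the paper's linear-program-based 1-bit basis pursuit is not reproduced, and the problem sizes are assumptions.

```python
import numpy as np

def biht(A, y_sign, k, iters=200):
    """Binary iterative hard thresholding: estimate the direction of a k-sparse x
    from y_sign = sign(A x)."""
    m, n = A.shape
    x = A.T @ y_sign                                   # correlation ("proxy") start
    for _ in range(iters):
        x = x + A.T @ (y_sign - np.sign(A @ x)) / m    # step toward sign consistency
        small = np.argsort(np.abs(x))[:-k]             # hard threshold: keep k largest
        x[small] = 0.0
    return x / np.linalg.norm(x)                       # amplitude is lost in 1-bit CS

rng = np.random.default_rng(3)
n, m, k = 32, 300, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[4, 11, 27]] = [1.0, -0.8, 0.6]
x_true /= np.linalg.norm(x_true)                       # only the direction is identifiable
y_sign = np.sign(A @ x_true)                           # 1-bit measurements
x_hat = biht(A, y_sign, k)
```

Note the final normalization: scaling x by any positive constant leaves sign(Ax) unchanged, so only the support and direction can be recovered, exactly as the abstract states.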

8.
Nowadays, still images are used everywhere in the digital world, and the shortage of storage capacity and transmission bandwidth makes efficient compression solutions essential. A revolutionary mathematical tool, the wavelet transform, has already shown its power in image processing. MinImage, the major topic of this paper, is an application that compresses still images by wavelets; it is used to compress grayscale and true-color images. It implements the wavelet transform to code standard BMP image files into LET wavelet image files, a format defined in MinImage. The code is written in C++ on the Microsoft Windows NT platform. This paper illustrates the design and implementation details of MinImage according to the image compression stages. First, the preprocessor generates the wavelet transform blocks. Second, the basic wavelet decomposition is applied to transform the image data into wavelet coefficients; the discrete wavelet transforms are the kernel component of MinImage and are discussed in detail, and different wavelet transforms can be plugged in to extend the functionality of MinImage. The third step is quantization: the standard scalar quantization algorithm and the optimized quantization algorithm, as well as dequantization, are described. The last part of MinImage is the entropy-coding scheme; the reordering of the coefficients based on the Peano curve and the different entropy coding methods are discussed. This paper also gives the specification of the wavelet compression parameters adjusted by the end user. The interface, parameter specification, and analysis of MinImage are shown in the final appendix.
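The transform → quantize → dequantize → inverse-transform pipeline the paper walks through can be sketched end to end with a one-level 2-D Haar transform and uniform scalar quantization. This is a generic illustration, not MinImage's actual code (the LET format, Peano-curve reordering, and entropy coding are omitted), and the quantization step sizes are assumptions.

```python
import numpy as np

def haar2(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

def quantize(band, delta):     # uniform scalar quantizer
    return np.round(band / delta)

def dequantize(q, delta):
    return q * delta

rng = np.random.default_rng(4)
img = np.round(rng.uniform(0, 255, (8, 8)))          # toy 8-bit image block
ll, lh, hl, hh = haar2(img)
d_ll, d_hi = 1.0, 8.0                                # coarser steps for detail subbands
rec = ihaar2(dequantize(quantize(ll, d_ll), d_ll),
             *[dequantize(quantize(b, d_hi), d_hi) for b in (lh, hl, hh)])
```

Since each reconstructed pixel is a ±1 combination of one coefficient per subband, the reconstruction error is bounded by half of each subband's step size, here 0.5 + 3 × 4.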

9.
In this paper, we study robust quaternion matrix completion and provide a rigorous analysis for provable estimation of a quaternion matrix from a random subset of its corrupted entries. In order to generalize the results from real matrix completion to quaternion matrix completion, we derive some new formulas to handle the noncommutativity of quaternions. We solve a convex optimization problem which minimizes the nuclear norm of the quaternion matrix, a convex surrogate for the quaternion matrix rank, plus the ℓ1-norm of the sparse quaternion matrix entries. We show that, under incoherence conditions, a quaternion matrix can be recovered exactly with overwhelming probability, provided that its rank is sufficiently small and that the corrupted entries are sparsely located. The quaternion framework can be used to represent the red, green, and blue channels of color images. Results on recovering missing/noisy color image pixels, posed as a robust quaternion matrix completion problem, are given to show that the performance of the proposed approach is better than that of the tested methods, including image inpainting methods, a tensor-based completion method, and a quaternion completion method using semidefinite programming.

10.
In this paper, we give an analytical model of the compression error of down-sampled compression based on the wavelet transform, which explains why down-sampling before compression can improve coding performance. We approximate the missing details due to down-sampling and compression by a linear combination of a set of basis vectors under an L1-norm criterion. We then propose a down-sampled, high-frequency-information-approximated coding scheme, apply it to natural images, and achieve gains in both subjective and objective quality compared with JPEG2000.

11.
In image reconstruction algorithms, the choices of filter functions and interpolating functions are very important for the computational speed and the quality of the reconstructed image; in particular, for fan-beam geometry, the occurrence of the singular integral operator may lead to large oscillations compared with the original image. In this paper we give a direct convolution algorithm which avoids the complex computations occurring in the Fourier transform; then, using a circle integral, we obtain a stable computational program. Different from the window functions used by many previous researchers, our algorithm chooses a window function similar to Gabor's window function e^{-x^2/2}, which can be regarded as an approximation to the inverse Fourier transform of a locally integrable frequency function. We also point out that such reconstruction procedures can be used to deal with SPECT projection data with constant attenuation.

12.
There exists a close relation among chaos, coding, and cryptography. All three can be combined into a whole, as aggregated chaos-based coding and cryptography (ATC), to compress and encrypt data simultaneously. Image data in particular have high redundancy and are widely transmitted, so research on ATC for images is well worthwhile and very helpful for real applications. JPEG achieves a high compression ratio but provides no security; if powerful cryptographic features are incorporated into JPEG, its applications can be further extended. For this reason, in this paper, GLS coding, a special form of ATC that attains simultaneous compression and encryption, is used to modify JPEG and fill this gap. An image is first processed using DCT, quantization, and run-length coding in turn, just as in JPEG. It is then encoded and encrypted simultaneously by utilizing GLS coding and a binary keystream produced by a chaotic generator. Results demonstrate that our scheme not only achieves good compression performance but also resists known/chosen-plaintext attacks efficiently.
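The "binary keystream resulting from the chaotic generator" idea can be illustrated in miniature with a logistic-map keystream XORed into a byte stream. This is only the keystream/encryption half of the picture: GLS coding, which couples compression with encryption, is not reproduced here, and the map parameters playing the role of a key are assumptions.

```python
def logistic_keystream(x0, r, nbytes):
    """Byte keystream from the logistic map x -> r*x*(1-x); x0 in (0,1) acts as the key."""
    x, out = x0, bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)             # one chaotic iteration
        out.append(int(x * 256) & 0xFF)   # crude quantisation of the orbit to a byte
    return bytes(out)

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"JPEG entropy-coded payload"   # stand-in for run-length-coded data
key = logistic_keystream(x0=0.3141592653, r=3.99, nbytes=len(plaintext))
ciphertext = xor_bytes(plaintext, key)
recovered = xor_bytes(ciphertext, key)      # XOR with the same keystream decrypts
```

Sensitivity to the initial condition x0 is what makes the chaotic orbit usable as key material; two nearby keys diverge to unrelated keystreams after a few iterations.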

13.
We propose an image restoration method. The method generalizes image restoration algorithms that are based on the Moore–Penrose solution of certain matrix equations that define the linear motion blur. Our approach is based on least-squares solutions of these matrix equations, in which an arbitrary matrix of appropriate dimensions is included besides the Moore–Penrose inverse. In addition, the method is a useful tool for improving results obtained by other image restoration methods; toward that end, we investigate the case where the arbitrary matrix is replaced by the matrix obtained from the Haar-basis reconstructed image. The method has been tested by reconstructing an image after removal of the blur caused by uniform linear motion and filtering of the noise corrupting the image pixels. The quality of the restoration is observable to the human eye. Benefits of using the method are illustrated by the improvement in signal-to-noise ratio and by the values of the peak signal-to-noise ratio. Copyright © 2013 John Wiley & Sons, Ltd.
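For the special case where the arbitrary matrix term is dropped, restoration reduces to applying the Moore–Penrose inverse of the blur matrix. The sketch below builds a 1-D uniform-linear-motion blur matrix (each output pixel averages `span` consecutive input pixels) and restores a scan line with `np.linalg.pinv`; the sizes and the simple boundary handling are illustrative assumptions.

```python
import numpy as np

n, span = 64, 5
H = np.zeros((n, n))
for i in range(n):
    H[i, i:i + span] = 1.0 / span     # each blurred pixel averages `span` inputs;
                                      # the slice clips automatically at the boundary

rng = np.random.default_rng(5)
f = rng.uniform(0.0, 1.0, n)          # one scan line of the original image
g = H @ f                             # observation blurred by uniform linear motion
f_hat = np.linalg.pinv(H) @ g         # Moore-Penrose restoration (noise-free case)
```

In the noise-free case this H is invertible, so the pseudoinverse restores the line exactly; the paper's extra least-squares term matters once noise is present.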

14.
The Radon transform is the mathematical foundation of computerized tomography (CT) [1]. Its important applications include medical CT, nondestructive testing, etc. If one is especially interested in the places where the image function changes sharply, such as the interfaces between two different tissues, between healthy and diseased tissue, or between two different materials, and wants to reconstruct the outlines of those interfaces, one should reconstruct the singularities of the image function. The exact inversion of the Radon transform is valid only for smooth functions [2], so the singularities of the reconstructed function must be studied separately. This research covers the propagation and inversion of singularities of the Radon transform. If the convolution-backprojection method is used to reconstruct the image function, the reconstructed function becomes blurred at the singularities of the original function. M. Jiang et al. [3] developed a blind deconvolution method for deblurring the reconstructed image. From [4] and subsequent research, we see that one can use data in a neighborhood of the singularities of the Radon transform to invert those singularities, and therefore reconstruction is available for some incomplete-data problems.

15.
Sparsity-driven image recovery methods assume that images of interest can be sparsely approximated under some suitable system. As discontinuities of 2D images often show geometrical regularities along image edges with different orientations, an effective sparsifying system should have high orientation selectivity. There have been enduring efforts on constructing discrete frames and tight frames for improving the orientation selectivity of tensor product real-valued wavelet bases/frames. In this paper, we studied the general theory of discrete Gabor frames for finite signals, and constructed a class of discrete 2D Gabor frames with optimal orientation selectivity for sparse image approximation. Besides high orientation selectivity, the proposed multi-scale discrete 2D Gabor frames also allow us to simultaneously exploit sparsity prior of cartoon image regions in spatial domain and the sparsity prior of textural image regions in local frequency domain. Using a composite sparse image model, we showed the advantages of the proposed discrete Gabor frames over the existing wavelet frames in several image recovery experiments.
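Orientation selectivity is easy to demonstrate: a real-valued 2-D Gabor filter responds strongly to stripes whose orientation and frequency match its own, and weakly otherwise. The sketch below uses a generic Gabor kernel, not the paper's multi-scale discrete Gabor frame construction; the kernel size, width, and frequency are assumptions.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=2.5, size=9):
    """Real-valued 2-D Gabor filter oscillating along orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()                            # zero mean: no response to flat areas

def response_energy(img, kern):
    """Energy of the valid 2-D correlation of `img` with `kern`."""
    kh, kw = kern.shape
    out = 0.0
    for i in range(img.shape[0] - kh + 1):
        for j in range(img.shape[1] - kw + 1):
            out += (img[i:i + kh, j:j + kw] * kern).sum() ** 2
    return out

x = np.arange(32)
stripes = np.sin(2 * np.pi * 0.25 * x)[None, :].repeat(32, axis=0)  # vertical stripes
e_match = response_energy(stripes, gabor_kernel(0.0, 0.25))          # aligned filter
e_cross = response_energy(stripes, gabor_kernel(np.pi / 2, 0.25))    # orthogonal filter
```

The aligned filter resonates with the stripe frequency while the orthogonal one integrates it away, which is exactly the selectivity a good sparsifying system needs at edges.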

16.
Compressed sensing (CS) is a completely new theory of information acquisition and processing, which shows that sparse signals can be exactly reconstructed at sampling rates far below the Shannon-Nyquist rate. Starting from CS theory, this paper studies reconstruction algorithms for block-sparse signals via mixed l2/lq (0 …
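Block-sparse recovery algorithms are typically built around a block (group) shrinkage operator. Since the general mixed l2/lq case lacks a closed form for arbitrary q, the sketch below shows the q = 1 proximal operator, which shrinks each block's l2 norm; the block size and threshold are assumptions.

```python
import numpy as np

def block_soft_threshold(x, block_size, tau):
    """Prox of tau * sum_i ||x_{block i}||_2: shrink each block's l2 norm by tau.
    Blocks whose norm is below tau are set entirely to zero (block sparsity)."""
    out = x.copy()
    for start in range(0, len(x), block_size):
        blk = x[start:start + block_size]
        nrm = np.linalg.norm(blk)
        scale = max(1.0 - tau / nrm, 0.0) if nrm > 0 else 0.0
        out[start:start + block_size] = scale * blk
    return out

x = np.array([3.0, 4.0,   0.1, -0.1,   0.0, 0.0])   # three blocks of size 2
y = block_soft_threshold(x, 2, 1.0)
```

Unlike entrywise soft thresholding, the whole block survives or dies together: the (3, 4) block (norm 5) is shrunk to (2.4, 3.2), while the small block is zeroed out.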

17.
Electrical capacitance tomography (ECT) is considered as a promising process tomography (PT) technology, and its successful applications depend mainly on the precision and speed of the image reconstruction algorithms. In this paper, based on the wavelet multi-scale analysis method, an efficient image reconstruction algorithm is presented. The original inverse problem is decomposed into a sequence of inverse problems, which are solved successively from the largest scale to the smallest scale. At different scales, the inverse problem is solved by a generalized regularized total least squares (TLS) method, which is developed using a combinational minimax estimation method and an extended stabilizing functional, until the solution of the original inverse problem is found. The homotopy algorithm is employed to solve the objective functional. The proposed algorithm is tested by the noise-free capacitance data and the noise-contaminated capacitance data, and excellent numerical performances and satisfactory results are observed. In the cases considered in this paper, the reconstruction results show remarkable improvement in the accuracy. The spatial resolution of the reconstructed images by the proposed algorithm is enhanced and the artifacts in the reconstructed images can be eliminated effectively. As a result, a promising algorithm is introduced for ECT image reconstruction.

18.
Democracy in action: Quantization, saturation, and compressive sensing
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. To date, most of the CS literature has been devoted to studying the recovery of sparse signals from a small number of linear measurements. In this paper, we study more practical CS systems where the measurements are quantized to a finite number of bits; in such systems some of the measurements typically saturate, causing significant nonlinearity and potentially unbounded errors. We develop two general approaches to sparse signal recovery in the face of saturation error. The first approach merely rejects saturated measurements; the second approach factors them into a conventional CS recovery algorithm via convex consistency constraints. To prove that both approaches are capable of stable signal recovery, we exploit the heretofore relatively unexplored property that many CS measurement systems are democratic, in that each measurement carries roughly the same amount of information about the signal being acquired. A series of computational experiments indicate that the signal acquisition error is minimized when a significant fraction of the CS measurements is allowed to saturate (10–30% in our experiments). This challenges the conventional wisdom of both conventional sampling and CS.
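The first ("rejection") approach can be simulated directly: quantize with a deliberately low saturation level, discard the saturated measurements, and reconstruct from the rest. For simplicity the sketch recovers a dense signal by least squares (enough unsaturated measurements remain), rather than running a full sparse-recovery algorithm; all sizes and the saturation level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n)
y = A @ x

sat = 0.6 * np.abs(y).max()              # saturation level: the largest samples clip
delta = 2 * sat / 2 ** 8                 # 8-bit uniform quantizer step
yq = np.clip(delta * (np.floor(y / delta) + 0.5),   # midpoint quantization ...
             -sat + delta / 2, sat - delta / 2)     # ... with hard saturation

keep = np.abs(yq) < sat - delta          # rejection: drop possibly-saturated samples
x_hat = np.linalg.lstsq(A[keep], yq[keep], rcond=None)[0]
```

Rejection works here precisely because the measurements are "democratic": the surviving rows still carry enough information to invert the system, and the residual error is set only by the quantizer step.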

19.
In this paper, we propose a new model for MR image reconstruction based on second order total variation ( \(\text {TV}^{2}\) ) regularization and wavelets, which can be viewed as requiring the image to be sparse in both spatial finite differences and wavelet transforms. Furthermore, by applying the variable splitting technique twice, the augmented Lagrangian method, and the Barzilai-Borwein step size selection scheme, an ADMM algorithm is designed to solve the proposed model. It reduces the reconstruction problem to several unconstrained minimization subproblems, which can be solved by shrinkage operators and alternating minimization algorithms. The proposed algorithm need not solve a fourth-order PDE but only several second-order PDEs, which improves computational efficiency. Numerical results demonstrate the effectiveness of the presented algorithm and illustrate that the proposed model outperforms some reconstruction models in the quality of the reconstructed images.
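The role of the \(\text {TV}^{2}\) term, penalizing second-order finite differences so that piecewise-affine images are cheap, can be seen on a 1-D toy example: an affine signal has zero \(\text {TV}^{2}\), while a single kink contributes exactly one nonzero second difference. The operator below is the generic discrete second difference, a 1-D stand-in for the paper's 2-D regularizer.

```python
import numpy as np

def second_diff(n):
    """Discrete second-order difference operator D2 (the stencil behind TV^2)."""
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # u[i] - 2*u[i+1] + u[i+2]
    return D2

u_affine = 0.5 * np.arange(10) + 3.0        # affine signal: zero everywhere under D2
u_kink = np.abs(np.arange(10) - 5.0)        # one kink: a single nonzero second diff
D2 = second_diff(10)
tv2_affine = np.abs(D2 @ u_affine).sum()    # TV^2 seminorm values
tv2_kink = np.abs(D2 @ u_kink).sum()
```

This is why \(\text {TV}^{2}\) avoids the staircase artifacts of first-order TV: smooth ramps cost nothing, and only genuine curvature changes are penalized.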

20.
Sigma Delta (\(\Sigma \Delta \)) quantization, a quantization method that first surfaced in the 1960s, has been widely adopted in digital products such as cameras, cell phones, and radars. The method features great robustness with respect to quantization noise by sampling an input signal at a super-Nyquist rate. Compressed sensing (CS) is a frugal acquisition method that exploits the sparsity structure of a signal to reduce the number of samples required for lossless acquisition. Treating this reduced number as an effective dimensionality of the set of sparse signals, one can define a relative oversampling/subsampling rate as the ratio between the actual sampling rate and the effective dimensionality. When recording these “compressed” analog measurements via Sigma Delta quantization, a natural question arises: will the signal reconstruction error, previously shown to decay polynomially in the vanilla oversampling rate for band-limited functions, now decay polynomially in the relative oversampling rate? Answering this question is one of the main goals in this direction. The study of quantization in CS has so far been limited to proving error convergence results for Gaussian and sub-Gaussian sensing matrices as the number of bits and/or the number of samples grows to infinity. In this paper, we provide a first result for the more realistic Fourier sensing matrices. The main idea is to randomly permute the Fourier samples before feeding them into the quantizer. We show that the random permutation can effectively increase the low-frequency power of the measurements and thus enhance the quality of \(\Sigma \Delta \) quantization.
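A first-order \(\Sigma \Delta \) quantizer is a two-line recursion: quantize the accumulated state, then feed the quantization error back. The sketch below runs it on an oversampled constant input, where the running mean of the output bits tracks the input value; the input length and value are illustrative assumptions.

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta: q_i = sign(u_{i-1} + y_i), u_i = u_{i-1} + y_i - q_i.
    The state u accumulates the quantization error and stays bounded for |y| <= 1."""
    u, q = 0.0, np.empty(len(y))
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0   # 1-bit quantizer
        u = u + yi - q[i]                     # error feedback
    return q

# Oversampled constant input: the bit average approximates the input value
y = np.full(1000, 0.3)
q = sigma_delta_1bit(y)
```

Telescoping the recursion gives sum(q) = sum(y) − u_N, so with the state bounded the average of N bits matches the input to within O(1/N), which is the noise-shaping robustness the abstract refers to.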
