Similar Literature
20 similar records found (search time: 15 ms)
1.
Wavelet coding of volumetric medical datasets   Cited: 1 (self-citations: 0, others: 1)
Several techniques based on the three-dimensional (3-D) discrete cosine transform (DCT) have been proposed for volumetric data coding. These techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for medical applications. This paper gives an overview of several state-of-the-art 3-D wavelet coders that do meet these requirements and proposes new compression methods exploiting the quadtree and block-based coding concepts, layered zero-coding principles, and context-based arithmetic coding. Additionally, a new 3-D DCT-based coding scheme is designed and used for benchmarking. The proposed wavelet-based coding algorithms produce embedded data streams that can be decoded up to the lossless level and support the desired set of functionality constraints. Moreover, objective and subjective quality evaluation on various medical volumetric datasets shows that the proposed algorithms provide competitive lossy and lossless compression results when compared with the state-of-the-art.
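Lossless, scalable wavelet coding of the kind surveyed above rests on integer (lifting-based) wavelet transforms, which map integers to integers reversibly. As a minimal illustration of that property (our own sketch, not the coders proposed in the paper), the code below applies a one-level integer Haar transform along each axis of a small volume and verifies perfect reconstruction:

```python
import numpy as np

def haar_fwd_1d(x):
    """Forward integer Haar (S-transform) along the first axis: averages, then details."""
    x0, x1 = x[0::2], x[1::2]
    d = x1 - x0               # detail coefficients
    a = x0 + (d >> 1)         # integer averages (floor via arithmetic shift)
    return np.concatenate([a, d], axis=0)

def haar_inv_1d(y):
    """Exact inverse of haar_fwd_1d."""
    n = y.shape[0] // 2
    a, d = y[:n], y[n:]
    x0 = a - (d >> 1)
    x1 = x0 + d
    out = np.empty_like(y)
    out[0::2], out[1::2] = x0, x1
    return out

def haar3d(vol, inverse=False):
    """One decomposition level along each axis of a 3-D integer volume (even dims)."""
    f = haar_inv_1d if inverse else haar_fwd_1d
    axes = (2, 1, 0) if inverse else (0, 1, 2)  # inverse undoes axes in reverse order
    for ax in axes:
        vol = np.moveaxis(f(np.moveaxis(vol, ax, 0)), 0, ax)
    return vol
```

Because every lifting step is exactly invertible in integer arithmetic, decoding "up to the lossless level" is possible from a single embedded stream.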

2.
In this work, we present a coding scheme based on a rate-distortion optimum wavelet packets decomposition and on an adaptive coding procedure that exploits spatial non-stationarity within each subband. We show, by means of a generalization of the concept of coding gain to the case of non-stationary signals, that it may be convenient to perform subband decomposition optimization in conjunction with intraband optimal bit allocation. In our implementation, each subband is partitioned into blocks of coefficients that are coded using a geometric vector quantizer with a rate determined on the basis of spatially local statistical characteristics. The proposed scheme appears to be simpler than other wavelet packets-based schemes presented in the literature and achieves good results in terms of both compression and visual quality.
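The coding gain mentioned above is, in its classical form for equally sized subbands, the ratio of the arithmetic mean to the geometric mean of the subband variances; the paper generalizes it to non-stationary signals. A sketch of the textbook formula only, not the paper's generalization:

```python
import math

def coding_gain(subband_variances):
    """Classical subband coding gain for equally sized bands:
    arithmetic mean over geometric mean of the subband variances."""
    n = len(subband_variances)
    am = sum(subband_variances) / n
    gm = math.prod(subband_variances) ** (1.0 / n)
    return am / gm
```

The gain is 1 when all variances are equal (no benefit from subband coding) and grows as the decomposition concentrates energy into fewer bands, which is why optimizing the wavelet-packet split can pay off.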

3.
This paper proposes a new wavelet transform video coder which employs motion compensation, wavelet decomposition, and entropy-constrained vector quantization (ECVQ), in sequence. Each of the layered subimages obtained from the wavelet decomposition is segmented into basic blocks, and the blocks are then selectively encoded by ECVQ according to the energy of their samples. We introduce an efficient method to encode the map indicating which blocks are encoded, based on inter-band prediction followed by quadtree encoding. The proposed coder uses a simple forward analyzer to optimize the encoding parameters and introduces a preprocessing step that normalizes the input vectors of ECVQ in order to reduce the image-dependency of ECVQ codebooks. Simulation results show that our video coder provides good PSNR (peak signal-to-noise ratio) performance and efficient rate control.
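The map of encoded blocks is compressed with a quadtree: a uniform square region is signalled with a single flag and value, and a mixed region is split into four quadrants. A minimal quadtree codec for a binary map (our own sketch, omitting the inter-band prediction step) might look like:

```python
def quadtree_encode(block):
    """Encode a square 0/1 map (side a power of two) as a bit string:
    '1'+value for a uniform block, '0' followed by the four quadrants otherwise."""
    n = len(block)
    flat = [v for row in block for v in row]
    if all(v == flat[0] for v in flat):
        return "1" + str(flat[0])
    h = n // 2
    bits = "0"
    for r0 in (0, h):              # quadrants: TL, TR, BL, BR
        for c0 in (0, h):
            quad = [row[c0:c0 + h] for row in block[r0:r0 + h]]
            bits += quadtree_encode(quad)
    return bits

def quadtree_decode(bits, n):
    """Inverse of quadtree_encode; returns (map, remaining_bits)."""
    if bits[0] == "1":
        v = int(bits[1])
        return [[v] * n for _ in range(n)], bits[2:]
    rest, h, quads = bits[1:], n // 2, []
    for _ in range(4):
        q, rest = quadtree_decode(rest, h)
        quads.append(q)
    top = [a + b for a, b in zip(quads[0], quads[1])]
    bottom = [a + b for a, b in zip(quads[2], quads[3])]
    return top + bottom, rest
```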

4.
We present a new segmentation method for extracting thin structures embedded in three-dimensional medical images based on modern variational principles. We demonstrate the importance of the edge alignment and homogeneity terms in the segmentation of blood vessels and vascular trees. To that end, the Chan-Vese minimal variance method is combined with the boundary alignment and geodesic active surface models. An efficient numerical scheme is proposed. In order to simultaneously detect a number of different objects in the image, a hierarchical approach is applied.

5.
A combined-transform coding (CTC) scheme is proposed to reduce the blocking artifact of conventional block transform coding and hence to improve the subjective performance. The proposed CTC scheme is described and its information-theoretic properties are investigated. Computer simulation results for a class of chest X-ray images are presented. A comparison between the CTC scheme and the conventional discrete cosine transform (DCT) and discrete Walsh-Hadamard transform (DWHT) demonstrates the performance improvement of the proposed scheme. In addition, combined coding can also be used in noiseless coding, yielding a slight improvement in the compression performance if it is used properly.

6.
Digital watermarking can be used as a data hiding technique to interleave medical images with patient information prior to transmission and storage. While digital image watermarking and lossy compression methods have been widely studied, much less attention has been paid to their application in medical imaging, due partially to speculation that degradation of image information would impair viewer performance. This article describes a hybrid data hiding/compression system adapted to medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization, into the JP3D encoder. The latter maintains conformity with its JPEG2000 antecedents, so watermark embedding can be applied to two-dimensional as well as volumetric images. Results of our method applied to magnetic resonance and computed tomography medical images show that the watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.

7.
Wavelet moments of images   Cited: 2 (self-citations: 0, others: 2)
Starting from the definition of general feature moments, this paper derives the wavelet moments of an image in polar coordinates. It then presents two methods for constructing wavelet moments after a multiresolution wavelet transform of the image: one built from the approximation coefficients at an arbitrary scale, and one built from the edge image of wavelet modulus maxima. Finally, experiments verify the effectiveness of the proposed constructions of image wavelet moments.

8.
The enormous size of volumetric medical images (VMI) poses a transmission and storage problem that can be addressed with compression. For the lossy compression of a very long VMI sequence, automatically maintaining the diagnostic features in reconstructed images is essential. The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio. Combining a codebook updating strategy with the well-known set partitioning in hierarchical trees (SPIHT) technique, the DCCR mechanism provides an excellent coding gain. Experimental results show that the proposed approach is superior to the pure SPIHT and JPEG2000 algorithms in terms of coding performance. We also propose an iterative fast searching algorithm that finds the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve. The algorithm performs quality control quickly, smoothly, and reliably.
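The user-defined quality demand above is expressed in PSNR. For reference, a plain PSNR helper using the standard definition (not part of the paper's codec):

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length sample sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical signals
    return 10.0 * math.log10(peak * peak / mse)
```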

9.
This paper presents a new lossy coding scheme based on a 3D wavelet transform and lattice vector quantization for volumetric medical images. The main contribution of this work is the design of a new codebook enclosing a multidimensional dead zone during the quantization step, which makes it possible to better account for correlations between neighboring voxels. Furthermore, we present an efficient rate-distortion model that simplifies the bit allocation procedure for our intra-band scheme. Our algorithm has been evaluated on several CT- and MR-image volumes. At high compression ratios, we show that it can outperform the best existing methods in terms of rate-distortion trade-off. In addition, our method better preserves details and thus produces reconstructed images that are less blurred than those of the well-known 3D SPIHT algorithm, which stands as a reference.
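The multidimensional dead zone in the lattice codebook generalizes the familiar scalar dead-zone quantizer, in which small coefficients collapse to index 0. A scalar sketch of that underlying idea (our simplification, not the paper's lattice design):

```python
def deadzone_quantize(x, step, deadzone):
    """Scalar dead-zone quantizer: |x| < deadzone maps to index 0;
    the remainder is uniformly quantized with the given step."""
    if abs(x) < deadzone:
        return 0
    sign = 1 if x > 0 else -1
    return sign * (int((abs(x) - deadzone) // step) + 1)

def deadzone_dequantize(q, step, deadzone):
    """Reconstruct at the centre of each quantization cell."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (deadzone + (abs(q) - 0.5) * step)
```

Widening the dead zone zeroes out more near-zero wavelet coefficients, which is where most of the bit savings in wavelet coders come from.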

10.
A region-based high-capacity lossless data-hiding method for medical images   Cited: 1 (self-citations: 0, others: 1)
Targeting the typical region-based characteristics of medical images, a high-capacity lossless data-hiding method based on region segmentation and histogram shifting is proposed. The method uses maximum between-class variance (Otsu) thresholding to obtain the foreground region of the original image, then uses aggregated polygon approximation and image fitting to determine the data-embedding region. During embedding, difference-histogram cyclic shifting and a coding-based histogram-shifting method are used to embed data in the foreground and background regions respectively, which raises the capacity of the original histogram-shifting method and solves its overflow problem. Experimental results show that the total embedding capacity can exceed 1 bit/pixel while the stego-image quality remains around 40 dB, making the method suitable for high-capacity data hiding in quality-sensitive images with regional characteristics.
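Plain histogram shifting, which the method above extends with difference histograms and region-wise embedding, can be sketched as follows. The caller supplies a peak bin and an empty zero bin with peak < zero; these assumptions (and all names) are ours for this minimal version:

```python
def hs_embed(pixels, bits, peak, zero):
    """Shift histogram bins strictly between peak and zero up by one,
    then embed one bit at each pixel equal to peak (0 -> peak, 1 -> peak+1)."""
    out, it = [], iter(bits)
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)              # make room next to the peak bin
        elif v == peak:
            out.append(peak + next(it, 0))
        else:
            out.append(v)
    return out

def hs_extract(stego, peak, zero):
    """Recover the embedded bits and restore the original pixels exactly."""
    bits, rec = [], []
    for v in stego:
        if v == peak:
            bits.append(0); rec.append(peak)
        elif v == peak + 1:
            bits.append(1); rec.append(peak)
        elif peak + 1 < v <= zero:
            rec.append(v - 1)              # undo the shift
        else:
            rec.append(v)
    return bits, rec
```

Capacity equals the height of the peak bin, and restoration is exact, which is what makes the scheme "lossless" (reversible).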

11.
The authors present a new technique for coding gray-scale images for facsimile transmission and printing on a laser printer. They use a gray-scale image encoder so that it is only at the receiver that the image is converted to a binary pattern and printed. The conventional approach is to transmit the image in halftoned form, using entropy coding (e.g., CCITT Group 3 or JBIG). The main advantages of the new approach are that one can get higher compression rates and that the receiver can tune the halftoning process to the particular printer. They use a perceptually based subband coding approach. It uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique. In particular, they used a 300 dots/inch write-black laser printer and a standard halftoning scheme ("classical") for that resolution. For nearly transparent coding of gray-scale images, the proposed technique requires lower rates than the standard facsimile techniques.

12.
We present a new technique for coding gray-scale images for facsimile transmission and printing on a laser printer. We use a gray-scale image encoder so that it is only at the receiver that the image is converted to a binary pattern and printed. The conventional approach is to transmit the image in halftoned form, using entropy coding (e.g. CCITT Group 3 or JBIG). The main advantages of the new approach are that we can get higher compression rates and that the receiver can tune the halftoning process to the particular printer. We use a perceptually based subband coding approach. It uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique. In particular, we used a 300 dots/inch write-black laser printer and a standard halftoning scheme ("classical") for that resolution. For nearly transparent coding of gray-scale images, the proposed technique requires lower rates than the standard facsimile techniques.
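Receiver-side halftoning as described in the two entries above can use any screening or error-diffusion method tuned to the target printer. The papers use a classical screen; the widely known Floyd-Steinberg error diffusion serves here only as a concrete stand-in:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values 0..255) by error diffusion,
    pushing each pixel's quantization error onto unvisited neighbours."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new > 0 else 0
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Error diffusion approximately preserves local mean intensity, so a mid-gray input yields a dot pattern whose density matches the gray level.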

13.
14.
Thomas, G. Electronics Letters, 1997, 33(3): 184-185
If input queued switches can be designed to maintain two or more separate queues per input line, with each queue associated with a subset of the output addresses, throughputs exceeding the well-known limit of 58.6% due to head-of-line (HOL) blocking effects can be obtained. The switch complexity is only O(N), not O(N^2) as in some recent proposals. The author proposes three switching rules and presents simulation results indicating that all three outperform a familiar theoretical baseline model.
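The 58.6% figure is the classic saturation throughput 2 - sqrt(2) of a single-FIFO input-queued switch under uniform traffic. A small simulation of that baseline (our own sketch, not of the author's multi-queue rules) reproduces it approximately:

```python
import random

def hol_throughput(n_ports, n_slots, seed=1):
    """Saturated N x N input-queued crossbar with one FIFO per input:
    each slot, every head-of-line (HOL) cell contends for its output;
    one winner per output departs and is replaced by a fresh random cell."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    departures = 0
    for _ in range(n_slots):
        contenders = {}
        for inp, dest in enumerate(hol):
            contenders.setdefault(dest, []).append(inp)
        for dest, inputs in contenders.items():
            winner = rng.choice(inputs)          # losers stay blocked at the HOL
            hol[winner] = rng.randrange(n_ports)  # next cell, uniform destination
            departures += 1
    return departures / (n_ports * n_slots)
```

With per-output (or per-subset) queues at each input, the blocked cells behind a losing HOL cell no longer idle the whole input, which is how the limit is exceeded.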

15.
While Network Coding cooperative relaying (NC-relaying) has the merit of high spectral efficiency, Superposition Coding relaying (SC-relaying) has the merit of high throughput. In this paper, a novel concept, coded cooperative relaying, is presented as a unified scheme covering both NC-relaying and SC-relaying. For the SC-relaying strategy, which can be viewed as a one-way coded relaying scheme over a multiple-access channel, closed-form solutions for the outage probabilities of the basic signal and the additional signal are first obtained. The Diversity-and-Multiplexing Tradeoff (DMT) characteristics of the basic and additional signals are then fully investigated, together with the optimal closed-form solutions. Comparative numerical analysis shows that the throughput evaluation error based on the closed-form solution is about 0.15 nats, which is within the acceptable error range. Due to the mutual effect between the two source signals, the maximal achievable values of the two multiplexing gains are less than 1.

16.
This paper studies the maximal-throughput problem of multi-rate multicast with network coding. Unlike previous work on network coding, which concentrates on single-rate multicast, this paper considers link heterogeneity and addresses it with multi-rate multicast. The maximal achievable throughput problem for multi-rate multicast is first formalized, and it is proved that, under the conditions of independent layers and fixed layer rates, maximizing the throughput of multi-rate multicast with network coding is NP-hard; an upper bound on the maximal throughput is also given. In addition, the maximal-throughput problem with dependent layers and variable layer rates is studied.

17.
This paper investigates the maximal achievable multi-rate throughput problem of a multicast session in the presence of network coding. Departing from previous works, which focus on single-rate network coding, our work takes the heterogeneity of sinks into account and provides multiple data layers to address the problem. The maximal achievable throughput problem is first formulated under the assumption that the data layers are independent and the layer rates are static. It is proved that the problem in this case is, unfortunately, Non-deterministic Polynomial-time (NP)-hard. In addition, our formulation is extended to the problems with dependent layers and dynamic layers. Furthermore, an approximation algorithm which satisfies certain fairness is proposed.

18.
A hybrid coding system that uses a combination of set partitioning in hierarchical trees (SPIHT) and vector quantisation (VQ) for image compression is presented. Here, the wavelet coefficients of the input image are rearranged to form wavelet trees composed of the corresponding wavelet coefficients from all the subbands of the same orientation. A simple tree classifier is proposed to group wavelet trees into two classes based on amplitude distribution. Each class of wavelet trees is encoded using an appropriate procedure, specifically either SPIHT or VQ. Experimental results show that the advantages obtained by combining the superior coding performance of VQ with the efficient cross-subband prediction of SPIHT are appreciable for the compression task, especially for natural images with large portions of texture. For example, the proposed hybrid coding outperforms SPIHT by 0.38 dB in PSNR at 0.5 bpp for the Bridge image, and by 0.74 dB at 0.5 bpp for the Mandrill image.

19.
On average throughput and alphabet size in network coding   Cited: 1 (self-citations: 0, others: 1)
We examine the throughput benefits that network coding offers with respect to the average throughput achievable by routing, where the average throughput refers to the average of the rates that the individual receivers experience. We relate these benefits to the integrality gap of a standard linear programming formulation for the directed Steiner tree problem. We describe families of configurations over which network coding at most doubles the average throughput, and analyze a class of directed graph configurations with N receivers where network coding offers benefits proportional to √N. We also discuss other throughput measures in networks, and show how in certain classes of networks, average throughput bounds can be translated into minimum throughput bounds, by employing vector routing and channel coding. Finally, we show configurations where use of randomized coding may require an alphabet size exponentially larger than the minimum alphabet size required.
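The canonical example of network coding beating routing is the butterfly network: XOR-ing the two source packets on the bottleneck link lets both receivers recover both packets in one use of the network. A toy sketch (our own, with hypothetical names):

```python
def butterfly_delivery(a, b):
    """Butterfly network with one bottleneck link: the bottleneck forwards
    a XOR b; receiver 1 also hears a on its side link, receiver 2 hears b."""
    coded = a ^ b                  # the single packet crossing the bottleneck
    at_receiver1 = (a, coded)      # side link + bottleneck
    at_receiver2 = (b, coded)
    r1 = (at_receiver1[0], at_receiver1[0] ^ at_receiver1[1])  # a, then b = a^(a^b)
    r2 = (at_receiver2[0] ^ at_receiver2[1], at_receiver2[0])  # a = b^(a^b), then b
    return r1, r2
```

Routing could push only one of a or b through the bottleneck per use, so each receiver would average 1.5 packets; coding delivers 2 to both, the factor the paper's bounds quantify in general configurations.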

20.
A subband image codec is presented that approximately attains a user-prescribed fidelity by allowing the encoder's compression rate to vary. The fixed distortion subband coding (FDSBC) system is suitable for use with future packet-switched networks. The codec's design is based on an algorithm that allocates distortion among the subbands to minimize channel entropy. By coupling this allocation procedure with judiciously selected subband quantizers, an elementary four-band codec was obtained. Additional four-band structures may be nested in a hierarchical configuration for improved performance. Each of the configurations tested attains mean square distortions within 2.0 dB of the user-specified value over a wide range of distortion for several standard test images. Rate versus mean-square distortion performance rivals that of fixed-rate systems of similar complexity. The encoder's output is formatted to take advantage of prioritized packet networks. Simulations show that FDSBC is robust with respect to packet loss and may be used effectively for progressive transmission applications.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号