Similar Documents
20 similar documents found.
1.
In this paper we present a new distributed video coding (DVC) architecture for wireless capsule endoscopy. It builds on state-of-the-art DVC systems but does not use key frames. Instead, it uses an adapted vector quantization (VQ) whose search complexity is shifted to the decoder. VQ makes it possible to create good side information (SI) by exploiting the similarities of human anatomy, so the SI is generated from a codebook (CB) rather than by motion-compensated prediction. This approach greatly reduces the complexity of the encoder, which codes only Wyner-Ziv frames, and allows progressive decoding. The encoder of the proposed DVC generates only a simple hash, which the decoder uses to select the corresponding VQ codeword. Experimental results show rate-distortion performance better than JPEG and demonstrate that scalable coding can be used to control the rate and energy consumed.
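The abstract does not give the construction of the hash or the codebook search; the sketch below illustrates only the general idea of hash-guided codeword selection at the decoder, assuming (purely for illustration) that the hash is a vector of sub-block means and that the codebook is an array of candidate blocks.

```python
import numpy as np

def block_hash(block, k=4):
    """Illustrative 'hash': means over a k x k grid of sub-blocks (an assumption, not the paper's hash)."""
    h, w = block.shape
    return block.reshape(k, h // k, k, w // k).mean(axis=(1, 3)).ravel()

def select_codeword(received_hash, codebook, k=4):
    """Decoder side: pick the codebook entry whose hash is closest to the received hash."""
    dists = [np.linalg.norm(block_hash(cw, k) - received_hash) for cw in codebook]
    return codebook[int(np.argmin(dists))]   # used as side information (SI)

# toy usage: 16x16 blocks, a random codebook standing in for anatomical similarity
rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(64, 16, 16)).astype(float)
frame_block = codebook[10] + rng.normal(0, 5, (16, 16))      # "current" block at the encoder
si = select_codeword(block_hash(frame_block), codebook)      # decoder-built side information
```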

2.
We present simple color space transformations for lossless image compression and compare them with established transformations, including RCT and YCoCg-R, and with the optimal KLT, on 3 sets of test images and with significantly different compression algorithms: JPEG-LS, JPEG2000 and JPEG XR. One of the transformations, RDgDb, which requires just 2 integer subtractions per image pixel, on average gives the best ratios for JPEG2000 and JPEG XR, while for a specific set, or in the case of JPEG-LS, its compression ratios are either the best or within 0.1 bpp of the best. The overall best ratios were obtained with JPEG-LS and the modular-arithmetic variant of RDgDb (mRDgDb). Another transformation (LDgEb), based on analogous transformations in the human visual system, is better than RCT and YCoCg-R with respect to complexity and average ratios, although worse than RDgDb; for one of the sets it obtains the best ratios.
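The abstract specifies only that RDgDb costs two integer subtractions per pixel. The sketch below assumes those subtractions are R - G and G - B, with the R channel kept unchanged; this is an illustrative reading, not a statement of the paper's exact definition, and the modular variant mRDgDb is not shown.

```python
import numpy as np

def rdgdb_forward(rgb):
    """Assumed RDgDb-style transform: keep R, store two channel differences (2 subtractions/pixel)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return np.stack([r, r - g, g - b], axis=-1)        # [R, Dg, Db]

def rdgdb_inverse(t):
    """Exact integer inverse, so the transform is lossless."""
    r, dg, db = t[..., 0], t[..., 1], t[..., 2]
    g = r - dg
    b = g - db
    return np.stack([r, g, b], axis=-1)

rgb = np.random.default_rng(1).integers(0, 256, size=(8, 8, 3))
assert np.array_equal(rdgdb_inverse(rdgdb_forward(rgb)), rgb)   # losslessly invertible
```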

3.
Removing perceptual redundancy plays an important role in image compression. In this paper we develop a foveated just-noticeable-difference (FJND) model to quantify the perceptual redundancy in an image and integrate it into the H.265/HEVC intra coding framework to provide a perceptually lossless image coding solution. Unlike conventional JND models, the proposed FJND model considers the relationship between the contrast masking effect and the foveation properties of the HVS. Furthermore, to achieve perceptually lossless coding, the FJND model is integrated into the H.265/HEVC framework by determining the quantization parameter so that the resulting distortion is no larger than the FJND threshold. Experiments demonstrate that the proposed method effectively improves compression performance.
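The abstract does not describe how the FJND threshold is mapped to a quantization parameter. The sketch below is a generic, hedged illustration: it assumes distortion grows roughly in proportion to the HEVC quantization step Qstep ≈ 2^((QP-4)/6) and picks the largest QP that keeps the modelled distortion under the threshold. The constant k and the linear distortion model are assumptions, not the paper's method.

```python
def qp_for_jnd(jnd_threshold, k=1.0, qp_min=0, qp_max=51):
    """
    Pick the largest HEVC QP whose (approximate) quantization distortion stays
    under a JND threshold.  The distortion model D ~ k * Qstep is an illustrative
    assumption; the paper's actual distortion/threshold mapping may differ.
    HEVC quantization step: Qstep ~ 2 ** ((QP - 4) / 6).
    """
    best = qp_min
    for qp in range(qp_min, qp_max + 1):
        qstep = 2.0 ** ((qp - 4) / 6.0)
        if k * qstep <= jnd_threshold:
            best = qp          # largest QP still perceptually lossless under the model
        else:
            break
    return best

print(qp_for_jnd(jnd_threshold=4.0))   # a higher threshold allows coarser quantization
```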

4.
In this paper, a novel method for lossless image encryption based on set partitioning in hierarchical trees and cellular automata is proposed. The proposed method embeds the encryption into the compression process, in which a small part of the data is encrypted quickly while the good coding characteristics of set partitioning in hierarchical trees (SPIHT) are maintained. The proposed encryption system adopts three stages of scrambling and diffusion. In each stage of encryption, different chaotic systems are used to generate a plaintext-related key stream to maintain high security and to resist certain attacks. Moreover, the length of the coded-and-compressed color image bitstream is more uncertain, making it harder for attackers to decipher the algorithm. The experimental results indicate that the bitstream length is compressed to 50% of the original image, showing that our proposed algorithm has a higher lossless compression ratio than existing algorithms. Meanwhile, the encryption scheme passes the entropy analysis, sensitivity analysis, lossless recovery test, and SP800-22 test.

5.
Two major ISDN applications that will undoubtedly affect worldwide telecommunications in the coming decade are discussed: (1) video transmission and (2) image transmission. Brief reviews of the history of the videophone and of current video coding technologies are presented. The application of videophones using p × 64 (the CCITT coding algorithm up to 1.5 Mb/s) and the DCT (discrete cosine transform) algorithm for narrowband ISDN is discussed. Broadcast-TV-quality DS3 45 Mb/s video codecs are also briefly discussed as a probable videophone system in the broadband ISDN era. The explosive growth of facsimile services is reviewed, and the progress of image coding technologies and their standards is covered. The prospects of high-resolution image transfer systems with ISDN are addressed.

6.
A hyperspectral image compression method based on inter-frame decorrelation
Considering the characteristics of hyperspectral images and the practical needs of hardware implementation, a wavelet-based hyperspectral image compression algorithm using forward prediction and inter-frame decorrelation is proposed. Inter-frame redundancy of hyperspectral images is removed through image matching and inter-frame decorrelation; the residual images are compressed with a wavelet-based fast bit-plane algorithm combined with adaptive arithmetic coding, and the output bitstream is controlled according to a rate-distortion criterion, achieving high-fidelity compression of hyperspectral images. Experiments demonstrate the effectiveness of the scheme: the wavelet-based fast bit-plane algorithm with adaptive arithmetic coding is faster than SPIHT and is easy to implement in hardware.
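As a rough illustration of forward-prediction inter-frame decorrelation, the sketch below predicts each spectral frame from the previous one with a least-squares gain and keeps only the residual for wavelet coding. The gain-based predictor stands in for the paper's image-matching step, whose details are not given in the abstract.

```python
import numpy as np

def band_residuals(cube):
    """
    Illustrative forward prediction between adjacent spectral frames (bands):
    frame k is predicted by frame k-1 scaled by a least-squares gain (an assumption).
    cube: (bands, H, W) array.  Returns (first frame, residual frames).
    """
    residuals = []
    for k in range(1, cube.shape[0]):
        prev, cur = cube[k - 1].astype(float), cube[k].astype(float)
        gain = (prev * cur).sum() / max((prev * prev).sum(), 1e-12)
        residuals.append(cur - gain * prev)          # decorrelated residual to be wavelet-coded
    return cube[0], residuals

cube = np.random.default_rng(2).integers(0, 4096, size=(5, 32, 32))
first_frame, res = band_residuals(cube)
```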

7.
Interferometric spectral images have their own characteristics: the correlation between adjacent spectral lines is weak, and the line data themselves also have distinctive features, with the data in the main interference region varying sharply while the data in other regions change monotonically. Based on these characteristics, this paper proposes a region-classification method that splits the data of each spectral line into two classes, the main interference region and the non-main interference region; the main interference region is described by data-similarity matching, while the non-main interference region is analyzed by quadratic curve fitting. This data analysis method helps improve the coding efficiency for this class of images. Simulation results show that the method can reduce the lossless-compression output bitrate by 0.2-0.4 bpp and can also improve lossy compression efficiency.
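For the non-main interference region, the abstract describes quadratic curve fitting of the monotonically varying data. A minimal sketch of that idea follows; the segmentation into main and non-main regions and the similarity matching for the main region are not shown.

```python
import numpy as np

def quadratic_residual(segment):
    """
    Fit a quadratic to a monotonically varying segment of a spectral line and
    return the fit coefficients plus the residual that would actually be coded.
    (Illustration of the quadratic-curve-fitting idea only.)
    """
    x = np.arange(len(segment), dtype=float)
    coeffs = np.polyfit(x, segment, deg=2)        # 3 coefficients describe the trend
    residual = segment - np.polyval(coeffs, x)    # small residual is cheaper to encode
    return coeffs, residual

segment = 0.01 * np.arange(200.0) ** 2 + np.random.default_rng(3).normal(0, 0.5, 200)
coeffs, residual = quadratic_residual(segment)
```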

8.
To improve the poor contrast and low visibility of hazy images, this paper combines the hazy-weather imaging model with the dark channel prior and proposes a new dehazing algorithm based on color spaces. First, the airlight is estimated in the RGB color space according to the dark channel prior; the image is then converted from RGB to the HSI and HSV color spaces, the value (brightness) component in HSV space is dehazed using the atmospheric scattering model, and finally the saturation component in HSI space is corrected, yielding the dehazed image. The algorithm produces clear images and, compared with traditional single-image dehazing methods, is faster and gives more natural results.
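A minimal sketch of the dark-channel-prior steps mentioned in the abstract is given below: airlight estimation from the brightest dark-channel pixels, and the atmospheric scattering model J = (I - A)/t + A applied to a brightness channel. The transmission estimate used here is a common simplification and an assumption on our part; the paper's HSI/HSV conversions and saturation correction are not reproduced.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a local minimum filter (dark channel prior)."""
    m = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(m, pad, mode="edge")
    out = np.empty_like(m)
    h, w = m.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_airlight(img, patch=15, top=0.001):
    """Airlight = mean color of the brightest pixels of the dark channel (a common DCP recipe)."""
    dc = dark_channel(img, patch)
    n = max(1, int(top * dc.size))
    idx = np.argsort(dc.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze_value_channel(v, a_v, omega=0.95, t0=0.1):
    """Scattering model J = (I - A)/t + A on a brightness channel; the transmission
    estimate below is a simplification, not the paper's exact method."""
    t = np.clip(1.0 - omega * (v / a_v), t0, 1.0)
    return (v - a_v) / t + a_v

img = np.random.default_rng(7).random((64, 64, 3))
A = estimate_airlight(img)
v = img.max(axis=2)                        # HSV value channel for data in [0, 1]
j = dehaze_value_channel(v, A.max())       # dehazed brightness (sketch)
```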

9.
Achieving a high embedding capacity and a low compression rate with a reversible data hiding method in the vector quantization (VQ) compressed domain is a technically challenging problem. This paper proposes a novel reversible steganographic scheme for VQ compressed images based on a locally adaptive data compression method. The proposed method embeds n secret bits into one VQ index of the index table in Hilbert-curve scan order. The experimental results show that the proposed method achieves average embedding rates of 0.99, 1.68, 2.28, and 3.04 bits per index (bpi) and average compression rates of 0.45, 0.46, 0.5, and 0.56 bits per pixel (bpp) for n = 1, 2, 3, and 4, respectively. These results indicate that the proposed scheme is superior to Chang et al.'s scheme 1 [19], Yang and Lin's scheme [21], and Chang et al.'s scheme 2 [24].

10.
The compressed sensing (CS) theory has been successfully applied to image compression in recent years, as most image signals are sparse in a certain domain. In this paper, we focus on how to improve the sampling efficiency of CS-based image compression by applying our proposed adaptive sampling mechanism to block-based CS (BCS), especially its reweighted variant. To achieve this goal, two solutions are developed, one at the sampling side and one at the reconstruction side. The proposed sampling mechanism allocates the CS measurements to image blocks according to the statistical information of each block, so as to sample the image more efficiently. A generic allocation algorithm is developed to assign the CS measurements, and several allocation factors derived in the transform domain are used to control the overall allocation in both solutions. Experimental results demonstrate that our adaptive sampling scheme offers a very significant quality improvement compared with traditional non-adaptive ones.
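A hedged sketch of adaptive measurement allocation follows: each block receives a share of the total CS-measurement budget proportional to a simple per-block statistic. Block variance is used here purely as a stand-in for the paper's transform-domain allocation factors.

```python
import numpy as np

def allocate_measurements(image, block=32, total_ratio=0.25):
    """
    Allocate CS measurements to blocks in proportion to block variance (an
    illustrative stand-in for the paper's allocation factors), keeping the
    overall sampling budget roughly fixed.
    """
    h, w = image.shape
    blocks = image.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    variances = blocks.reshape(-1, block * block).var(axis=1) + 1e-9
    budget = int(total_ratio * image.size)                 # total number of measurements
    weights = variances / variances.sum()
    counts = np.maximum(1, np.round(weights * budget)).astype(int)
    return counts.reshape(h // block, w // block)          # measurements per block

img = np.random.default_rng(4).random((256, 256))
print(allocate_measurements(img).sum(), "measurements allocated in total")
```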

11.
A collection of common electronic packaging names and terms. Below, in alphabetical order of their English names, the names and terms associated with the main package forms currently used for LSIs (including ICs) are collected and explained. These names and terms draw on and cite LSI-packaging materials from 12 semiconductor manufacturers in Japan and 7 semiconductor manufacturers in other countries*, as well as from the Japan Electronic Machinery Industry Association...

12.
Image compositing techniques are primarily used to achieve realistic composite results. Some existing image compositing methods, such as gradient-domain compositing and alpha matting, are widely used in computer vision and can typically achieve realistic results, especially seamless boundaries. However, when the candidate composite images and the target images differ markedly in color, texture or brightness, the composite results are unrealistic and inconsistent. At the same time, traditional compositing methods focus on basic feature matching and ignore semantic rationality in the composition process, so quite a few of them generate composite results that are not semantically valid. In this paper, a new multi-scale image composition method is presented. In the composition process, a wavelet pyramid and basic feature handling are used to achieve multi-scale composition. More importantly, a new criterion based on the semantic rationality of images is established, which ensures that the composite images are semantically valid. A large database was created to facilitate experimentation. The experiments show that the methodology introduced in this paper produces superior results compared to traditional composition methods: the composite results are not only consistent and seamless, but also semantically valid.

13.
With the rapid development of Internet technology, the copyright protection of color images has become more and more important. To this end, this paper designs a blind color digital image watermarking method based on image correction and eigenvalue decomposition (EVD). Firstly, all the eigenvalues of each pixel block in the color host image are obtained by EVD. Then, the sum of the absolute values of the eigenvalues is quantized with variable quantization steps to embed the color watermark image, which is encrypted by an affine transform and encoded with a Hamming code. If the watermarked image undergoes a geometric attack, the attacked image can be corrected using its geometric attributes. Finally, the inverse embedding process is performed to extract the color watermark. The advantages of the proposed method are as follows: (1) all Peak Signal-to-Noise Ratio (PSNR) values are greater than 42 dB; (2) the average Structural Similarity Index Metric (SSIM) values are greater than 0.97; (3) the maximum embedding capacity is 0.25 bpp; (4) the whole running time is less than 20 s; (5) the key space is larger than 2^450; (6) most Normalized Cross-correlation (NC) values are greater than 0.9. Compared with related methods, the experimental results show that the proposed method not only has better watermark invisibility and larger watermark capacity, but also higher security and stronger robustness against geometric attacks.
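The sketch below illustrates the quantization-based embedding step only: one bit is embedded by pushing the sum of the absolute eigenvalues of a pixel block onto one of two quantizer lattices (a QIM-style construction). The fixed step size, the uniform scaling used to realize the change, and the omission of encryption, Hamming coding and geometric correction are all simplifications for illustration.

```python
import numpy as np

def embed_bit(block, bit, step=24.0):
    """
    Embed one bit by quantizing the sum of the absolute eigenvalues of a square
    pixel block onto one of two lattices (QIM-style sketch).  Uniform scaling
    changes every eigenvalue by the same factor, so the sum lands on the target;
    the paper's variable steps and full pipeline are not reproduced.
    """
    b = block.astype(float)
    s = np.abs(np.linalg.eigvals(b)).sum()
    q = np.floor(s / step)
    target = (q + (0.25 if bit == 0 else 0.75)) * step   # two interleaved lattices
    return b * (target / s)

def extract_bit(block, step=24.0):
    s = np.abs(np.linalg.eigvals(np.asarray(block, dtype=float))).sum()
    return 0 if (s % step) < step / 2 else 1

blk = np.random.default_rng(5).integers(0, 256, size=(4, 4))
wm = embed_bit(blk, 1)
print(extract_bit(wm))   # recovers the embedded bit (up to numerical precision)
```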

14.
Based on an analysis of Shapiro's embedded zerotree wavelet (EZW) coding algorithm, an improved method is proposed to address its limited efficiency. By identifying significant subbands rather than the significant coefficients of EZW, the algorithm greatly reduces the number of zerotrees that must be coded and thus saves time. On top of this improvement, progressive coding of a region of interest in the image is realized by combining it with a coordinate-data compression algorithm. The algorithm can reconstruct the region of interest at high quality before all of the coded data of the image have been received, and the coding efficiency is also clearly improved.

15.
A new image correlation matching method based on a distance measure between corresponding pixels
In traditional image correlation matching, grey-level differences between the real-time image and the reference image, a certain amount of geometric deformation, and partial occlusion of the target easily degrade the performance of similarity measures based on accumulating the grey-level differences of corresponding pixels. This paper proposes a new image correlation matching algorithm from a different angle. Instead of summing the pixel grey-level differences between the template image and the target image, the method counts the number of points at which the two images are close to each other, which greatly improves the stability of the matching: a large patch of local noise will not affect the matching result, whereas such a situation significantly affects the result of traditional correlation algorithms. Experimental results show the effectiveness of the method.
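A straightforward sketch of the counting-based similarity measure follows: instead of summing grey-level differences, the score is the number of corresponding pixels whose difference stays within a tolerance, and the template position with the highest count wins. The tolerance value is an arbitrary choice for illustration.

```python
import numpy as np

def match_score(template, candidate, tol=10):
    """Similarity = number of corresponding pixels whose grey-level difference is
    within a tolerance (counting 'close' points instead of summing differences)."""
    return int((np.abs(template.astype(int) - candidate.astype(int)) <= tol).sum())

def best_match(template, image, tol=10):
    """Exhaustive search: slide the template over the image and keep the position
    with the highest count of close pixels (a sketch of the counting idea)."""
    th, tw = template.shape
    H, W = image.shape
    best_pos, best = (0, 0), -1
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            s = match_score(template, image[i:i + th, j:j + tw], tol)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```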

16.
Traditional lossless compression algorithms do not compress screen content images well. Based on the characteristics of typical screen images and taking the LZ4HC (LZ4 High Compression) algorithm as the implementation basis, this paper proposes a string-matching-based lossless screen image compression algorithm with high performance and low complexity (String Matching with High Performance and Low Complexity, SMHPLC). Compared with traditional dictionary-coding lossless compression algorithms, the new algorithm uses the pixel as the unit of search and matching, jointly optimizes the coding of the three parameters (unmatched string length, matched string length, and match offset), and applies mapped coding to these parameters. Experimental results show that SMHPLC combines high performance with low complexity, greatly reducing coding complexity while improving coding efficiency. On the AVS2 common test sequences of the "text and graphics with motion" category, SMHPLC saves 22.4% and 21.2% of bitrate overall compared with LZ4HC for the YUV and RGB formats respectively, while coding complexity is reduced by 34.6% and 46.8% respectively.

17.
Many existing works on lossy-to-lossless multiresolution image compression are based on the lifting concept. It is worth noting that a separable lifting scheme may not be very efficient at coping with the 2D characteristics of edges that are neither horizontal nor vertical. In this paper, we propose to use 2D non-separable lifting schemes that still enable progressive reconstruction and exact decoding of images. Their main advantage is that they yield a tractable optimization of all the decomposition operators involved. More precisely, we design the prediction operators by minimizing the variance of the detail coefficients. Concerning the update filters, we propose a new optimization criterion that aims at reducing the inherent aliasing artifacts. A theoretical analysis of the proposed method is conducted in terms of the adaptation criterion considered in the optimization of the update filter. Simulations carried out on still images and on residual images generated from stereo pairs show the benefits that can be drawn from the proposed optimization of the lifting operators.
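As a toy illustration of the prediction-design criterion (minimizing the variance of the detail coefficients), the sketch below solves a 1D least-squares problem for the weights that predict each odd sample from its two even neighbours. The paper itself optimizes 2D non-separable operators and also optimizes the update filter, neither of which is reproduced here.

```python
import numpy as np

def design_prediction_weights(signal):
    """
    Toy 1D lifting step: predict each odd sample from its two even neighbours with
    weights chosen by least squares, i.e. minimising the energy (variance, for
    near-zero-mean details) of d = x_odd - P @ w.  Sketch only.
    """
    even, odd = signal[0::2], signal[1::2]
    n = min(len(even) - 1, len(odd))
    P = np.stack([even[:n], even[1:n + 1]], axis=1)   # left and right even neighbours
    w, *_ = np.linalg.lstsq(P, odd[:n], rcond=None)
    detail = odd[:n] - P @ w
    return w, detail

x = np.cumsum(np.random.default_rng(6).normal(size=256))   # smooth-ish test signal
w, d = design_prediction_weights(x)
print(w, d.var())
```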

18.
A new PCNN-based image denoising method
This paper studies in depth how to use the pulse-coupled neural network (PCNN), a biologically motivated artificial neural network, for binary image denoising and image smoothing, and proposes a PCNN-based image denoising algorithm. Computer simulations show that PCNN can effectively restore binary images corrupted by noise, and that the signal-to-noise-ratio gain of the restored images is higher than that obtained with two other commonly used restoration methods, median filtering and mean filtering.

19.
We propose a fast and efficient image retrieval system based on color and texture features. The color features are represented by color histograms, and the texture features are represented by the block difference of inverse probabilities (BDIP) and the block variation of local correlation coefficients (BVLC). It is observed that combining the color features with texture features derived from the brightness component alone gives results approximately similar to combining them with texture features computed from all three color components, but with much less processing time. An analysis of various distance measures reveals that the square-chord distance outperforms the other prominent distance measures for the proposed method. A detailed experimental analysis is carried out using precision and recall on four datasets: Corel-5K, Corel-10K, UKbench and Holidays. A timing analysis is also performed to compare the processing speed of the proposed method with the best existing comparable methods.
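A small sketch of the colour-histogram feature and the square-chord distance (the sum of squared differences of the square roots of the two feature vectors) is given below; the BDIP/BVLC texture features are omitted, and the histogram binning is an arbitrary illustrative choice.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Normalised joint RGB histogram used as the colour feature (texture features omitted)."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def square_chord(x, y):
    """Square-chord distance: sum of squared differences of square roots."""
    return np.sum((np.sqrt(x) - np.sqrt(y)) ** 2)

def retrieve(query, database_feats, k=5):
    """Return indices of the k database images closest to the query feature."""
    d = np.array([square_chord(query, f) for f in database_feats])
    return np.argsort(d)[:k]

db_imgs = [np.random.default_rng(i).integers(0, 256, size=(32, 32, 3)) for i in range(10)]
feats = [color_histogram(im) for im in db_imgs]
print(retrieve(color_histogram(db_imgs[3]), feats, k=3))   # image 3 should rank first
```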

20.
The software implementation costs of most algorithms designed for image compression in wireless sensor networks do not justify their use for reducing the energy consumption and transmission delay of images. Even though a hardware solution looks very attractive for this problem, particular care must be taken when designing a low-power algorithm for image compression and transmission over these systems. The aim of this paper is to present and evaluate a hardware implementation of a user-driven image compression scheme designed to respect the energy constraints of image transmission over wireless sensor networks (WSNs). The proposed encoder is considered as a co-processor for tasks related to image compression and data packetization. In this paper, we discuss both the hardware architecture and the features of this encoder circuit when prototyped on FPGA (field-programmable gate array) and ASIC (application-specific integrated circuit) circuits.
