Similar Literature
20 similar documents retrieved (search time: 29 ms)
1.
A distributed arithmetic coding algorithm based on source symbol purging and a context model is proposed to solve the asymmetric Slepian–Wolf problem. The proposed scheme makes better use of both the correlation between adjacent symbols in the source sequence and the correlation between corresponding symbols of the source and side-information sequences to improve the coding performance of the source. Since the encoder purges a portion of the symbols from the source sequence, a shorter codeword length can be obtained. The purged symbols are still used as context for the subsequent symbols to be encoded. An improved method for calculating the posterior probability is also proposed based on the purging feature, so that the decoder can exploit the correlation within the source sequence to improve decoding performance. In addition, the scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. Simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits better decoding performance than other DSC schemes at the same code rate.
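As an illustration of the forbidden-symbol idea mentioned above, here is a minimal float-precision arithmetic coder sketch (not the paper's algorithm): a fraction `eps` of every coding interval is reserved and never produced by the encoder, so a decoder that lands in that region can flag corruption. The binary i.i.d. source and the probability values are illustrative assumptions.

```python
def ac_encode(bits, p1, eps=0.05):
    # Toy arithmetic encoder: the top eps fraction of every interval is a
    # forbidden region the encoder never enters.
    low, high = 0.0, 1.0
    for b in bits:
        usable = (high - low) * (1.0 - eps)
        split = low + usable * (1.0 - p1)     # P(bit = 0) = 1 - p1
        if b == 0:
            high = split
        else:
            low, high = split, low + usable
    return (low + high) / 2.0                 # any value in the final interval

def ac_decode(value, n, p1, eps=0.05):
    # Mirrors the encoder; landing in a forbidden region signals corruption.
    low, high, out = 0.0, 1.0, []
    for _ in range(n):
        usable = (high - low) * (1.0 - eps)
        if value >= low + usable:
            raise ValueError("corruption detected (forbidden symbol)")
        split = low + usable * (1.0 - p1)
        if value < split:
            out.append(0)
            high = split
        else:
            out.append(1)
            low, high = split, low + usable
    return out
```

A corrupted codeword value that drifts into a reserved sub-interval triggers the `ValueError`, which is the error-detection mechanism the abstract refers to.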

2.
3.
This paper presents a mixed framework based on efficient intra key frame coding and an improved side information (SI) generation scheme for transform-domain Wyner–Ziv (WZ) video coding. The performance of WZ video coding depends strongly on the quality of the SI, which can be generated from the decoded key frames produced by intra key frame coding: the better the decoded key frames, the better the generated SI. In this paper, a Burrows–Wheeler transform (BWT) based intra-frame video coding is proposed to produce improved decoded key frames. Furthermore, an improved SI generation scheme using a multilayer perceptron (MLP) is proposed. Comparative analysis with other standard WZ video coding techniques reveals that the proposed scheme outperforms its counterparts in terms of both coding efficiency and perceptual quality.
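The Burrows–Wheeler transform at the heart of the proposed intra key frame coder can be sketched in a few lines; this is the textbook sentinel-based construction, not the paper's optimized implementation.

```python
def bwt(s):
    # Burrows-Wheeler transform via sorted rotations of s plus a sentinel.
    s = s + "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t):
    # Classic O(n^2) inversion: repeatedly prepend t and re-sort the table.
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    return next(r for r in table if r.endswith("\0"))[:-1]
```

The transform groups similar contexts together, which is what makes the subsequent entropy coding of key frames more effective.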

4.
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit-plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit-plane extractor is encoded by a first-order entropy coder, and a low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in every band.
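A bit-plane extractor followed by a first-order (per-plane) entropy measure can be sketched as follows; the 8-bit input and the per-plane binary entropy are illustrative assumptions, not the CCSDS-IDC bitstream format.

```python
import numpy as np

def bit_planes(img, nbits=8):
    # Decompose an 8-bit image into its binary bit planes, LSB first.
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> k) & 1).astype(np.uint8) for k in range(nbits)]

def plane_entropy(plane):
    # First-order entropy of one binary plane (bits per pixel).
    p = float(plane.mean())
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```

High bit planes of a difference image are mostly zero and so have near-zero entropy, which is where the compression gain comes from.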

5.
A low-complexity multiview video coding scheme for wireless video sensor arrays
Based on motion-vector extrapolation at the network hub node, a low-complexity multiview video coding method for wireless video sensor arrays is proposed. The method addresses the characteristics of dense video sensor arrays: communication between views is complex, cabling is burdensome, and the encoders inside the camera nodes cannot carry out a complex encoding process because of limits on computing power and energy consumption. Using motion-vector extrapolation, the bulk of the motion-estimation computation is moved from the video encoders to the network hub node, so that the computational complexity of motion estimation at the encoder under the new framework is only 0.3% of that of conventional full-search motion estimation, reducing the power consumption on the encoding side of the sensor array. Experimental results show that the rate-distortion performance of the method exceeds H.264 I-frame coding by more than 4 dB, approaches H.264 P-frame coding, and outperforms distributed multiview video coding based on Wyner–Ziv theory.
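The motion-vector extrapolation performed at the hub node can be illustrated by its simplest constant-velocity form; the refinement search the hub would actually run around the extrapolated vector is omitted here.

```python
def extrapolate_mv(mv_prev2, mv_prev1):
    # Constant-velocity extrapolation at the hub node: continue the motion
    # observed between the two previously reconstructed frames, so the
    # camera-node encoder never has to run a full motion search itself.
    return (2 * mv_prev1[0] - mv_prev2[0], 2 * mv_prev1[1] - mv_prev2[1])
```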

6.
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner; as a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
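Random binning, the baseline ensemble this work builds on, can be demonstrated with a toy Slepian–Wolf decoder; a minimum-Hamming-distance rule stands in for the stochastic likelihood decoder analyzed in the paper, and the sequence length and bin count below are illustrative.

```python
import itertools
import random

def random_bins(n, nbins, seed=7):
    # Randomly partition all length-n binary sequences into bins; the encoder
    # sends only the bin index, i.e., a rate of log2(nbins)/n bits per symbol.
    rng = random.Random(seed)
    return {x: rng.randrange(nbins) for x in itertools.product((0, 1), repeat=n)}

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def sw_decode(bin_id, side_info, bins):
    # Minimum-distance stand-in for the likelihood decoder: pick the sequence
    # in the received bin that is closest to the side information.
    cands = [x for x, b in bins.items() if b == bin_id]
    return min(cands, key=lambda x: hamming(x, side_info))
```

Decoding fails exactly when some other sequence in the same bin is closer to the side information than the true source sequence, which is the event whose exponent the paper characterizes.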

7.
In distributed video coding, the quality of the side information determines the rate-distortion performance of the system: the higher the quality of the side information, the better the rate-distortion performance. Targeting the non-uniform motion of objects in video sequences and building on motion-compensated temporal interpolation (MCTI), this paper proposes a new side-information generation algorithm. The basic idea is to partition the macroblocks of a frame at the encoder with a multi-mode block algorithm, classifying them into slow-motion and fast-motion blocks; at the decoder, side information for slow-motion blocks is generated directly by MCTI, while side information for fast-motion blocks is refined by post-processing. Simulations show that, compared with generating side information directly by MCTI, the proposed algorithm raises the peak signal-to-noise ratio (PSNR) of the generated side information by about 0.8-1.2 dB, effectively improving its quality.

8.
孙中廷  华钢  徐永刚 《应用声学》2015,23(10):92-92
To address the high computational load and complexity of conventional video coding, a distributed video compressed-sensing algorithm based on dual side information is proposed. The algorithm combines compressed sensing with distributed video coding, dividing the video sequence into key frames and CS frames: key frames are encoded and decoded with conventional intra-frame techniques, while CS frames are encoded by compressed sensing at the encoder and reconstructed at the decoder using intra-block and inter-block dual side information together with a gradient-projection algorithm. Through dual-side-information motion estimation and the design of the compression encoder, a distributed video compressed-sensing model based on dual side information is constructed. Simulation results show that the model achieves efficient coding while shifting complexity from the encoder to the decoder, and improves compression capability and transmission speed at low sampling rates.
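The CS-frame sampling step can be sketched as block-wise random Gaussian measurement; the block size, measurement count, and sensing matrix below are illustrative assumptions, and the gradient-projection reconstruction is omitted.

```python
import numpy as np

def cs_measure_blocks(frame, block=8, m=16, seed=0):
    # Split a CS frame into non-overlapping blocks and take m random
    # Gaussian measurements per block (m << block*block gives compression).
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, block * block)) / np.sqrt(m)
    h, w = frame.shape
    meas = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = frame[i:i + block, j:j + block].reshape(-1)
            meas.append(phi @ x)
    return phi, np.array(meas)
```

Only `phi`'s seed and the measurement vectors need to be transmitted, which is what keeps the encoder light.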

9.
庄怀宇  吴成柯  李云松  刘凯 《光学学报》2005,25(11):477-1482
A region-of-interest (ROI) coding method for interference multispectral image compression based on embedded block coding with optimized truncation (EBCOT) is proposed. After the wavelet transform, the high-frequency subbands of the first decomposition level within the region of interest, i.e., the region containing the spectral information, are further decomposed in the vertical direction, and the ROI bit planes are then lifted. The Tier-1 (T1) encoder assigns different importance weights to the coding passes of different bit planes and encodes them from high to low, while the Tier-2 (T2) encoder adaptively feeds the resulting bit rate back to control the coding depth of T1; finally, rate-distortion optimized truncation is performed. Experimental results show that the method improves reconstructed image quality and effectively reduces the computation and memory usage of the EBCOT algorithm (at 1 bpp, the average peak signal-to-noise ratios of the whole image, the ROI, and the background of the test images all improve by more than 0.1 dB, while computation and memory usage drop by more than 40% and 60% on average), and the coding scheme is suitable for hardware implementation in interference multispectral image compression systems.
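The ROI bit-plane lift can be illustrated with a maxshift-style scaling sketch: ROI coefficients are shifted up by `s` bit planes before embedded coding so their bits are emitted earlier, and shifted back after decoding. This is a generic illustration, not the paper's exact weighting scheme.

```python
import numpy as np

def roi_bitplane_shift(coeffs, roi_mask, s):
    # Lift ROI coefficients by s bit planes so their bits are coded first.
    return np.where(roi_mask, coeffs.astype(np.int64) << s, coeffs)

def roi_unshift(coeffs, roi_mask, s):
    # Decoder-side inverse: shift the ROI coefficients back down.
    return np.where(roi_mask, coeffs >> s, coeffs)
```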

10.
This paper proposes a new coding method for multiview video that achieves compression with a four-dimensional Walsh transform operator. Using the 4-D order-n Walsh matrix transform, earlier color video-stream coding is extended to eight-view video coding, covering blocking of the video sequences, the forward and inverse Walsh transforms, and deblocking. The method exploits the correlation between video sequences and reduces inter-sequence redundancy. Multiview video coding based on the fast Walsh transform was implemented in VC++ 6.0, and compression performance was studied under different compression ratios. Analysis of the experimental data shows that the proposed method preserves video quality while achieving good, fast compression. The experimental results demonstrate the feasibility and effectiveness of the method; it is easy to implement quickly at the encoder and lays a foundation for further research on multiview video.
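A 1-D fast Walsh–Hadamard butterfly, the building block of the separable multi-dimensional transform described above, can be sketched as follows; the transform is its own inverse up to a factor of N.

```python
import numpy as np

def fwht(a):
    # Iterative fast Walsh-Hadamard transform; the length must be a power of 2.
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired elements.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Applying `fwht` along each axis of the blocked multiview data gives the separable 4-D transform; compression then comes from quantizing the small coefficients.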

11.
Multiview video coding (MVC) is an efficient compression scheme by which the large amount of multiview video data can be effectively processed. However, it ignores the characteristics of the human visual system. In this paper, we propose a multilevel region of interest (ROI) based bit allocation strategy for MVC, which takes advantage of visual redundancy to improve encoding efficiency. First, the macroblock (MB) saliency is derived from the depth information of the video sequence, and multilevel ROI segmentation is conducted based on the MB saliency distribution. Then, a multiview video bit allocation strategy based on the multilevel ROIs is proposed. We evaluated system performance on several multiview video sequences in the JMVC 8.5 reference software. Experimental results show that the quality of the ROIs improves considerably when the consumed bit rate is kept consistent with JMVC, and the proposed MVC method can save bit rate while maintaining overall image quality.

12.
The setting of the measurement number for each block is very important in a block-based compressed sensing system. In practical applications, however, we only have the initial measurement results of the original signal on the sampling side rather than the original signal itself, so we cannot directly allocate an appropriate measurement number to each block without knowing the sparsity of the original signal. To solve this problem, we propose an adaptive block-based compressed video sensing scheme based on saliency detection and side information. According to the Johnson–Lindenstrauss lemma, the initial measurement results can be used to perform saliency detection and obtain a saliency value for each block. Meanwhile, a side information frame, an estimate of the current frame, is generated on the reconstruction side by the proposed probability fusion model, and the proportion of significant coefficients in each block is estimated from the side information frame. Both the saliency value and the significant coefficient proportion reflect the sparsity of a block. These two estimates are fused, so that intra-frame and inter-frame correlation are used simultaneously for block sparsity estimation, and the measurement number of each block is then allocated according to the fused sparsity. In addition, we propose a weighting-based global recovery model that reduces the blocking artifacts of reconstructed frames. Experimental results show that, compared with existing schemes, the proposed scheme achieves a significant improvement in peak signal-to-noise ratio (PSNR) at the same sampling rate.
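The final allocation step can be sketched as a proportional split of the measurement budget according to the fused sparsity estimate; the fusion weight `alpha` and the largest-remainder rounding are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def allocate_measurements(saliency, coeff_ratio, total, alpha=0.5):
    # Fuse the two per-block sparsity estimates, then split the measurement
    # budget proportionally; leftovers go to the largest rounding remainders.
    s = alpha * saliency / saliency.sum() \
        + (1 - alpha) * coeff_ratio / coeff_ratio.sum()
    raw = s * total
    m = np.floor(raw).astype(int)
    for i in np.argsort(raw - m)[::-1][:total - m.sum()]:
        m[i] += 1
    return m
```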

13.
Interference multispectral image compression based on distributed source coding
李云松  孔繁锵  吴成柯  雷杰 《光学学报》2008,28(8):1463-1468
Based on the characteristics of interference multispectral images, an interference multispectral image compression algorithm based on distributed source coding is proposed. Adjacent images in an interference multispectral sequence exhibit a pronounced translational shift. The encoder detects the relative displacement between adjacent frames with a block-matching algorithm, performs bit-plane rate estimation jointly with the side-information frame estimated by block matching, and applies region-of-interest coding with rate-distortion-based bit-plane lifting, adjusting the rate-distortion slopes of different image regions for a more rational rate allocation. Experimental results show that the algorithm preserves the spectral information of multispectral images better than conventional algorithms and meets the requirements of satellite interference multispectral image compression systems at various compression ratios; it is easy to implement in hardware and well suited to the on-board environment.
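The displacement detection between adjacent frames can be sketched as exhaustive block matching that minimizes the mean absolute difference over a small search window; the window size below is an illustrative assumption.

```python
import numpy as np

def estimate_shift(ref, cur, max_disp=4):
    # Exhaustive block matching: find the displacement (dy, dx) minimising
    # the mean absolute difference over the overlap of the two frames.
    h, w = ref.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            a = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = cur[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            sad = np.abs(a - b).mean()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```

The shifted reference then serves as the side-information frame against which bit-plane rates are estimated.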

14.
This article addresses the problem of distributed lossless compression for hyperspectral images and proposes an effective classification-based lossless compression algorithm. First, a band selection algorithm is performed on the hyperspectral images to select the bands carrying considerable information. Next, the K-means algorithm is applied to the selected bands to obtain the classification map. To make full use of the spectral and spatial correlation, a multilinear regression model is introduced to construct high-quality side information for each class within the identical block according to the classification map. Subsequently, (n, k) linear grouping codes are employed to perform distributed source coding for each class separately. Experimental results show that the proposed algorithm has competitive lossless compression performance compared with other state-of-the-art algorithms.
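The multilinear-regression side information can be sketched as an ordinary least-squares fit of the current band against previously decoded bands of the same class; the bias term and the training mask are illustrative assumptions.

```python
import numpy as np

def predict_band(prev_bands, target, train_mask):
    # Fit target ~ sum_k w_k * band_k + bias on this class's training pixels,
    # then predict the whole band as side information.
    X = np.stack([b.reshape(-1) for b in prev_bands] + [np.ones(target.size)],
                 axis=1)
    y = target.reshape(-1)
    idx = train_mask.reshape(-1)
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return (X @ w).reshape(target.shape)
```

The closer this prediction is to the true band, the fewer parity bits the Slepian-Wolf coder needs to send.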

15.
OCDMA spectral-domain phase encoding based on step-chirped fiber gratings
A scheme for OCDMA spectral-domain phase encoding using step-chirped fiber gratings is proposed. A mapping code is introduced, and phase shifts are inserted between the corresponding sub-gratings according to the mapping code to realize correct encoding and decoding. The resulting encoder/decoder has a simple structure, and numerical simulation yields good correlation output.

16.
In our previous work, by combining the Hilbert scan with the symbol grouping method, an efficient run-length-based entropy coding was developed, and high-efficiency image compression algorithms based on this entropy coding were obtained. However, the 2-D Hilbert curves, a critical part of the above-mentioned entropy coding, are defined on squares whose side lengths are powers of 2, i.e., 2^n, while a subband is normally a rectangle of arbitrary size, and it is not straightforward to modify the Hilbert curve from squares of side length 2^n to an arbitrary rectangle. In this short article, we provide the details of constructing modified 2-D Hilbert curves for rectangles of arbitrary size. Furthermore, we extend the method from a 2-D rectangle to a 3-D cuboid. The 3-D modified Hilbert curves are used in a novel 3-D transform video compression algorithm that employs the run-length-based entropy coding. The modified 2-D and 3-D Hilbert curves introduced here may also prove useful for other applications in the future.
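For the power-of-two square case that the modified curves generalize, the standard index-to-coordinate mapping of the 2-D Hilbert curve looks like this; the arbitrary-rectangle construction in the article is more involved.

```python
def d2xy(n, d):
    # Map index d (0 <= d < n*n) along the Hilbert curve to (x, y) on an
    # n-by-n grid with n a power of two; standard bit-manipulation algorithm.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Consecutive indices always map to neighbouring pixels, which is why scanning along the curve produces long, compressible runs.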

17.
This article reports the design and implementation of a graphical display that presents an approximation of vocal tract area in real time during voiced vowel articulation. The acoustic signal is digitally sampled by the system. From these data a set of reflection coefficients is derived using linear predictive coding, and a matrix of area coefficients is then determined that approximates the vocal tract area of the user. From this information a graphical display is generated. The complete cycle of analysis and display repeats ≈20 times/s. Synchronised audio and visual sequences can be recorded and used as dynamic targets for articulatory development. Use of the system is illustrated by diagrams of system output for spoken cardinal vowels and for vowels sung in trained and untrained styles.
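The LPC analysis chain sketched below goes from autocorrelation through the Levinson–Durbin recursion to reflection (PARCOR) coefficients and then to lossless-tube section areas; the model order and the unit lip area are illustrative assumptions, not the system's actual settings.

```python
import numpy as np

def reflection_coeffs(x, order=8):
    # Levinson-Durbin recursion on the (biased) autocorrelation, returning
    # the reflection (PARCOR) coefficients k_1..k_order.
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / e
        ks.append(k)
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        e *= 1.0 - k * k
    return np.array(ks)

def area_function(ks, lips_area=1.0):
    # Lossless-tube model: adjacent section areas satisfy
    # A_i = A_{i+1} * (1 - k_i) / (1 + k_i), working back from the lips.
    areas = [lips_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return np.array(areas)
```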

18.
A new spectral binary coding method based on high-order residual quantization
Binary and multi-level spectral coding techniques enable fast matching, recognition, and classification of target spectra, but such quantized codes lose a large amount of spectral detail and cannot be decoded into a reconstruction approximating the original spectrum, which limits their applications. To solve these problems, a new spectral coding method based on high-order residual quantization, HOBC (high-order binary coding), is proposed. First, the spectral vector is normalized by mean removal, yielding a spectral sequence with values in (-1, 1). Then the ±1 code, the coding coefficient, and the residual (the first-order residual) of the normalized spectrum are computed, and from the first-order residual the ±1 codes and coefficients of the residuals of orders 2 through K are solved stage by stage. The result is K code sequences and their coefficients, which constitute the HOBC code. On typical spectral-library datasets, HOBC was compared with four methods, 0/1 binary coding BC01 (binary coding with 0 and 1), spectral analysis coding SPAM (spectral analysis manager), binary/four-level hybrid coding SDFC (spectral derivative feature coding), and four-level DNA coding, in quantization-coding and decoding-reconstruction experiments. The information entropy and storage cost of the shape-feature and slope-feature codes, the spectral vector distance (SVD) between the shape-feature codes and the original spectra, the inter-spectrum Pearson correlation coefficient (SCC), and the spectral angle (SAM, spectral angle mapping) were measured. The results show that in storage cost, HOBC codes of orders 1-4 equal the four reference codes, respectively; in entropy, HOBC orders 1-2 equal BC01 and SPAM, while HOBC orders 3-4 exceed SDFC and DNA coding; in SCC, HOBC order 1 equals BC01, and orders 2-4 are better than SPAM, SDFC, and DNA coding, respectively; and in SAM, HOBC orders 1-4 are all clearly better than the four reference methods. The four reference methods admit no explicit decoding and reconstruction, whereas HOBC easily reconstructs a decoded sequence approximating the original spectrum, with the SVD decreasing order by order. Further, spectral coding and supervised classification experiments on ten classes of ground targets were conducted on the public spectral dataset of the Linze grassland station. The results show that HOBC is clearly better than the four reference methods in Kappa coefficient, overall classification accuracy, and average classification accuracy; in particular, order-4 HOBC outperforms classification on the original spectra, and HOBC is also more robust than the other algorithms for hard-to-classify targets with few samples and high inter-class similarity. HOBC thus greatly compresses the data while its code sequences retain high information content and spectral separability, making it suitable for fast, high-accuracy spectral recognition and classification; its reconstructed sequences are highly similar to the original spectra, so in principle it is applicable to target recognition and classification.
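The stage-by-stage HOBC encoding described above (a ±1 sign pattern plus one scale per order, recursing on the residual) can be sketched directly; using the mean absolute residual as the scale is the least-squares choice for a ±1 code, and this sketch omits the paper's exact normalization details.

```python
import numpy as np

def hobc_encode(spec, order=4):
    # Order-K residual binarization: each stage stores a +/-1 sign pattern
    # and one scale (the mean absolute residual), then recurses on the residual.
    r = spec - spec.mean()              # zero-mean normalisation
    codes, scales = [], []
    for _ in range(order):
        b = np.where(r >= 0, 1.0, -1.0)
        a = np.abs(r).mean()
        codes.append(b)
        scales.append(a)
        r = r - a * b
    return codes, scales

def hobc_decode(codes, scales):
    # Reconstruction is the scale-weighted sum of the sign patterns.
    return sum(a * b for a, b in zip(scales, codes))
```

Each extra order strictly reduces the reconstruction error, which matches the order-by-order SVD decrease reported above.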

19.
霍炎  荆涛  李生红 《物理学报》2010,59(2):859-866
The statistical distribution of the discrete cosine transform (DCT) coefficients of video sequences is analyzed, and a statistical model of video DCT coefficients based on the Weibull probability density is proposed; this model describes the DCT characteristics of video sequences better than the Laplace and Cauchy densities. Based on the model and entropy-coding theory, the rate-quantization and distortion-quantization relations of video sequences are derived and then reasonably simplified according to the properties of real video sequences, yielding a new and fairly accurate rate-distortion model. Extensive simulations show that the proposed Weibull-based rate-distortion model describes the true rate-distortion behavior of both intra-frame and inter-frame coded video sequences well.
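Fitting a Weibull density to (the magnitudes of) DCT coefficients can be sketched with a probability-plot regression, a simple alternative to the maximum-likelihood fitting presumably used in the paper: since the Weibull CDF satisfies ln(-ln(1-F)) = k·ln(x) - k·ln(λ), a linear fit on the sorted data recovers shape k and scale λ.

```python
import numpy as np

def weibull_fit(samples):
    # Probability-plot fit: regress ln(-ln(1 - F)) on ln(x) over the sorted
    # positive samples; slope = shape k, intercept = -k * ln(scale).
    x = np.sort(np.asarray(samples, dtype=float))
    x = x[x > 0]
    n = len(x)
    F = (np.arange(1, n + 1) - 0.5) / n     # median-rank plotting positions
    k, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    return k, np.exp(-c / k)
```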

20.
We present a new decentralized classification system based on a distributed architecture. The system consists of distributed nodes, each possessing its own dataset and computing modules, along with a centralized server, which provides probes for classification and aggregates the responses of the nodes into a final decision. Each node, with access to its own training dataset of a given class, is trained as an auto-encoder consisting of a fixed data-independent encoder, a pre-trained quantizer, and a class-dependent decoder. These auto-encoders are therefore highly adapted to the class probability distribution for which the reconstruction distortion is minimized. Conversely, when an encoding-quantizing-decoding node observes data from a different distribution, unseen during training, there is a mismatch: the decoding is no longer optimal, and the reconstruction distortion increases significantly. The final classification is performed at the centralized classifier, which votes for the class with the minimum reconstruction distortion. Beyond its applicability to applications facing big-data communication problems and/or requiring private classification, the distributed scheme creates a theoretical bridge to the information bottleneck principle. The proposed system demonstrates very promising performance on basic datasets such as MNIST and FashionMNIST.
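The classify-by-reconstruction-distortion rule can be sketched with linear stand-ins for the auto-encoders: a shared random encoder and a per-class least-squares decoder (no quantizer). This is a deliberate simplification of the system described above, and the toy data are illustrative.

```python
import numpy as np

class ClassNode:
    # One distributed node: a fixed random encoder shared by all nodes and a
    # class-dependent linear decoder fitted to this node's own class data.
    def __init__(self, data, enc):
        self.enc = enc
        z = data @ enc                                       # fixed encoding
        self.dec, *_ = np.linalg.lstsq(z, data, rcond=None)  # train decoder

    def distortion(self, probe):
        rec = (probe @ self.enc) @ self.dec                  # encode-decode
        return float(np.sum((probe - rec) ** 2))

def classify(probe, nodes):
    # The central server votes for the class whose node reconstructs best.
    return min(range(len(nodes)), key=lambda i: nodes[i].distortion(probe))
```

A probe drawn from a node's own class reconstructs with low distortion, while the mismatched decoders of the other nodes inflate it, which is exactly the voting signal the server uses.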


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号