Similar Literature
20 similar documents found.
1.
In order to effectively improve the quality of side information in distributed video coding, we propose a side information generation scheme based on a coefficient matrix improvement model. The discrete cosine transform coefficient bands of the Wyner–Ziv frame at the encoder side are divided into entropy-coding coefficient bands and distributed-video-coding coefficient bands, and the coefficients of the entropy-coding bands are then sampled, splitting them into sampled and unsampled coefficients. The sampled coefficients are compressed losslessly with an adaptive arithmetic encoder. For the unsampled coefficients and the coefficients of the distributed-video-coding bands, a low-density parity-check accumulate encoder computes the parity bits, which are stored in the buffer and transmitted incrementally upon decoder request. At the decoder side, the optical flow method is used to generate the initial side information, which is then improved from the sampled coefficients by the coefficient matrix improvement model. The experimental results demonstrate that the proposed scheme effectively improves the quality of the side information, by about 0.2–0.4 dB, thereby improving the overall performance of the distributed video coding system.
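As a rough, assumption-laden illustration of the encoder-side split described above, the sketch below applies a 4×4 block DCT to a stand-in frame, groups coefficients into bands, marks the first few bands as entropy-coded bands, and subsamples their coefficients. The block size, band count, raster band ordering and the every-other-coefficient sampling pattern are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def dct_matrix(n=4):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def block_dct_bands(frame, block=4):
    """Return coefficient bands: bands[b] holds coefficient b of every block."""
    C = dct_matrix(block)
    h, w = frame.shape
    bands = [[] for _ in range(block * block)]
    for i in range(0, h, block):
        for j in range(0, w, block):
            X = C @ frame[i:i + block, j:j + block] @ C.T
            for b, v in enumerate(X.flatten()):
                bands[b].append(v)
    return [np.array(band) for band in bands]

rng = np.random.default_rng(0)
wz_frame = rng.integers(0, 256, (16, 16)).astype(float)   # stand-in Wyner-Ziv frame
bands = block_dct_bands(wz_frame)

num_entropy_bands = 4                                      # illustrative split point
entropy_bands = bands[:num_entropy_bands]                  # entropy-coded bands
dvc_bands = bands[num_entropy_bands:]                      # Wyner-Ziv (parity-protected) bands

# Within the entropy-coded bands, sample every other coefficient; the sampled
# coefficients would be compressed losslessly, the rest protected by parity bits.
sampled = [band[0::2] for band in entropy_bands]
unsampled = [band[1::2] for band in entropy_bands]
print(len(sampled[0]), len(unsampled[0]), len(dvc_bands))
```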

2.
The correlation properties of superstructured fiber Bragg grating (SSFBG) encoders/decoders in optical code division multiple access (OCDMA) systems are analyzed, considering the effects of the input pulse width, the wavelength detuning between the encoding and decoding gratings, and the refractive-index modulation amplitude of the gratings on the all-optical encoding/decoding performance. The results show that as the input pulse width and the wavelength detuning between encoder and decoder increase, both the autocorrelation peak-to-sidelobe ratio and the autocorrelation-to-cross-correlation peak ratio decrease, i.e., the encoding/decoding performance degrades. There is also a conflict between the insertion loss of the encoder/decoder and its correlation performance, so the refractive-index modulation amplitude of the SSFBG must be chosen as a compromise. A mathematical model of a time-domain phase-encoded OCDMA system based on SSFBG encoders/decoders is established, taking beat noise, multiple-access interference, receiver noise and the receiver bandwidth limitation into account, and all-optical thresholding and turbo coding are adopted to improve the performance of the coherent time-spreading OCDMA system.
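The sketch below is a purely numerical analogue of the correlation figures discussed above: it computes the autocorrelation peak-to-sidelobe ratio and the auto/cross-correlation peak ratio for two bipolar phase codes. It does not model the grating physics, pulse width, or wavelength detuning, and the random ±1 codes are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 63
code_a = rng.choice([-1.0, 1.0], N)   # bipolar phase code of user A (illustrative)
code_b = rng.choice([-1.0, 1.0], N)   # bipolar phase code of user B

# Decoding with a matched (conjugate, time-reversed) code is a correlation.
auto = np.correlate(code_a, code_a, mode="full")
cross = np.correlate(code_a, code_b, mode="full")

peak = np.max(np.abs(auto))
sidelobes = np.abs(np.delete(auto, np.argmax(np.abs(auto))))
p_s_ratio = peak / sidelobes.max()          # autocorrelation peak-to-sidelobe ratio
p_c_ratio = peak / np.abs(cross).max()      # auto/cross-correlation peak ratio
print(f"P/S = {p_s_ratio:.2f}, P/C = {p_c_ratio:.2f}")
```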

3.
李恒建  张家树 《中国物理 B》2010,19(5):50508-050508
In this study an adaptive arithmetic coder is embedded in the Baptista-type chaotic cryptosystem to implement secure data compression. To build the multiple lookup tables for secure data compression, the phase space of the chaotic map, uniformly distributed in the search mode, is divided non-uniformly according to a dynamic probability estimation of the plaintext symbols. As a result, more probable symbols are selected according to the local statistics of the plaintext, and the required number of iterations is small, since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniformity of the probabilities with which the number of iterations to be coded takes its possible values, compression is achieved by the adaptive arithmetic code. The system therefore offers both compression and security. Compared with original arithmetic coding, simulation results on the Calgary Corpus files show that the proposed scheme loses less than 12% in compression performance and is not susceptible to previously reported attacks on arithmetic coding algorithms.
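A minimal sketch of the adaptive-model side of such a scheme is shown below: running symbol counts are updated after every plaintext symbol and translated into a non-uniform partition of [0, 1), so that more probable symbols receive wider regions. The chaotic search, lookup tables and cipher itself are omitted, and the Laplace-smoothed counts are an assumption.

```python
from collections import Counter

def adaptive_partitions(message, alphabet):
    """Yield, before each symbol, the [low, high) region assigned to it by a
    non-uniform partition whose widths track the running symbol frequencies."""
    counts = Counter({s: 1 for s in alphabet})   # start from a uniform model
    for sym in message:
        total = sum(counts.values())
        partition, low = {}, 0.0
        for s in alphabet:                       # more probable symbol -> wider region
            width = counts[s] / total
            partition[s] = (low, low + width)
            low += width
        yield sym, partition[sym]
        counts[sym] += 1                         # adapt to the plaintext statistics

msg = "ABRACADABRA"
for sym, (lo, hi) in adaptive_partitions(msg, sorted(set(msg))):
    print(f"{sym}: region width {hi - lo:.3f}")
```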

4.
5.
林秋镇  黄国和  陈剑勇 《中国物理 B》2011,20(7):70501-070501
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In a finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of the register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block, so all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well, with high efficiency and minor compression loss compared with traditional entropy coding.
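The toy below illustrates the infinite-precision intuition only: refining [0, 1) through the inverse branches of a probability-matched piecewise linear map yields a code length of -log2(final interval width), which approaches the source entropy per symbol. It uses floating point and ignores the finite-register, variable-length-block machinery of the paper; all parameters are illustrative.

```python
import numpy as np

def interval_code_length(seq, probs):
    """Refine [0, 1) with sub-intervals proportional to the symbol probabilities
    (the inverse branches of a probability-matched piecewise linear map) and
    return the ideal code length in bits, -log2(final interval width)."""
    symbols = list(probs)
    cum = np.concatenate(([0.0], np.cumsum([probs[s] for s in symbols])))
    idx = {s: i for i, s in enumerate(symbols)}
    lo, width = 0.0, 1.0
    for s in seq:
        i = idx[s]
        lo = lo + width * cum[i]     # left edge of the chosen branch
        width = width * probs[s]     # interval shrinks by the symbol probability
    return -np.log2(width)

probs = {"a": 0.8, "b": 0.2}
rng = np.random.default_rng(2)
seq = rng.choice(list(probs), size=200, p=[probs[s] for s in probs])
entropy = -sum(p * np.log2(p) for p in probs.values())
print(f"bits/symbol = {interval_code_length(seq, probs) / len(seq):.3f}  "
      f"(source entropy = {entropy:.3f})")
```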

6.
A novel time-varying channel-adaptive low-complexity Chase (LCC) algorithm with low redundancy is proposed, in which only the necessary number of test vectors (TVs) is generated and the key equations are calculated according to the channel evaluation, reducing the decoding complexity. The algorithm estimates the number of symbol errors by counting the unreliable bits of the received sequence and dynamically adjusts the decoding parameters, which removes a large number of redundant calculations from the decoding process. We provide a simplified multiplicity assignment (MA) scheme and its architecture, as well as a multi-functional block that implements polynomial selection, Chien search and the Forney algorithm (PCF). On this basis, a high-efficiency LCC decoder with adaptive error-correcting capability is proposed. Compared with the state-of-the-art LCC (TV = 16) decoding, the number of TVs used by our decoder is reduced by 50.4% without loss of frame error rate (FER) performance. The hardware implementation results show that the proposed decoder achieves 81.6% lower average latency and 150% higher throughput than the state-of-the-art LCC decoder.
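A hedged sketch of the channel-adaptive Chase idea follows: count the bits whose |LLR| falls below a threshold, derive the Chase parameter η from that count, and enumerate only the 2^η test vectors obtained by flipping the η least reliable hard decisions. The threshold, the cap on η, and the mapping from unreliable-bit count to η are illustrative; the MA, key-equation and PCF stages of the decoder are not modeled.

```python
import numpy as np
from itertools import product

def adaptive_test_vectors(llr, threshold=1.0, max_eta=4):
    """Choose eta from the number of unreliable bits, then enumerate the
    2**eta Chase test vectors over the eta least reliable positions."""
    hard = (llr < 0).astype(int)                 # hard decisions
    unreliable = int(np.sum(np.abs(llr) < threshold))
    eta = min(unreliable, max_eta)               # channel-adaptive Chase parameter
    weak = np.argsort(np.abs(llr))[:eta]         # least reliable positions
    vectors = []
    for flips in product([0, 1], repeat=eta):
        tv = hard.copy()
        tv[weak] ^= np.array(flips, dtype=int)
        vectors.append(tv)
    return vectors

rng = np.random.default_rng(3)
llr = rng.normal(0, 2, size=16)                  # toy received LLRs
tvs = adaptive_test_vectors(llr)
print(f"{len(tvs)} test vectors generated")
```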

7.
The encoding algorithm of low-density parity-check (LDPC) codes was implemented on an ADSP-BF561 processor. To cope with the high encoding complexity, the parity-check matrix is transformed and then stored in data memory. To save storage space, a compression algorithm is adopted, and a corresponding encoding algorithm is proposed that adds no extra complexity. The performance of LDPC codes with different code rates is simulated and analyzed over different channels. LDPC codes with different rates were then applied to an underwater acoustic communication system in shallow-sea trials; the results show that LDPC codes improve the robustness of the communication system, and the lower the code rate, the better the performance. When the symbol signal-to-noise ratio before decoding is 7–8 dB, nearly error-free communication is achieved.
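A minimal sketch of systematic LDPC encoding of the kind that benefits from pre-transforming the parity-check matrix is given below. It assumes H has already been brought to the form H = [A | I] (a stand-in for the transformed matrix stored in data memory); encoding then reduces to parity = A·s over GF(2). The tiny matrix is illustrative, not one of the codes used in the sea trials.

```python
import numpy as np

# Toy parity-check matrix already transformed to systematic form H = [A | I].
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
H = np.concatenate([A, np.eye(3, dtype=np.uint8)], axis=1)

def encode(info_bits):
    """Systematic GF(2) encoding: parity = A @ s (mod 2), codeword = [s | p]."""
    parity = A @ info_bits % 2
    return np.concatenate([info_bits, parity])

s = np.array([1, 0, 1, 1], dtype=np.uint8)
c = encode(s)
print("codeword:", c, " H @ c mod 2 =", H @ c % 2)   # all-zero syndrome
```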

8.
In complex network environments, heterogeneous devices with different computational power always coexist. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over embedded fields based on a precoding matrix and then encodes the precoded packets over GF(2) before transmission to the network. After justifying the arithmetic compatibility over different finite fields in the encoding process, we derive a necessary and sufficient condition for decodability over different fields. Moreover, we theoretically study the construction of an optimal precoding matrix in terms of decodability. The numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only guarantees better decoding compatibility across fields than classical RLNC over a single field, but also outperforms Fulcrum RLNC in decoding performance over GF(2). Moreover, we take the sparsity of the received binary coding vector into consideration and demonstrate that, for a large enough batch size, this sparsity hardly affects the completion-delay performance in a wireless broadcast network.
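The sketch below illustrates only the outer GF(2) part of such a scheme: packets are combined with random binary coefficient vectors, and a receiver is decodable once its collected coefficient vectors reach full rank over GF(2). The embedded-field precoding matrix and the Fulcrum comparison are beyond this toy; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

K, L = 4, 8                                     # 4 source packets of 8 bits each
packets = rng.integers(0, 2, (K, L), dtype=np.uint8)

coded = []
while True:
    g = rng.integers(0, 2, K, dtype=np.uint8)   # random GF(2) coding vector
    coded.append((g, packets.T @ g % 2))        # XOR-combination of the packets
    G = np.array([g for g, _ in coded])
    if gf2_rank(G) == K:                        # decodable once G has full rank
        break
print(f"decodable after receiving {len(coded)} coded packets")
```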

9.
Based on the multiplicative group of the Galois field GF(q), a new coding scheme for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed. The new scheme has several advantages, such as simpler construction, easier encoder implementation, lower encoding and decoding complexity, and more flexible adjustment of the code length and code rate. Taking the characteristics of optical transmission systems into account, an irregular QC-LDPC (3843,3603) code suitable for optical transmission systems is constructed with the proposed scheme. The simulation results show that, at a bit error rate (BER) of 10−8, the net coding gain (NCG) of the irregular QC-LDPC (3843,3603) code is 2.14 dB, 1.19 dB, 0.24 dB and 0.14 dB higher than that of the classic RS (255,239) code in ITU-T G.975, the LDPC (32640,30592) code in ITU-T G.975.1, the regular SCG-LDPC (3969,3720) code constructed by the systematically constructed Gallager (SCG) coding scheme, and the regular QC-LDPC (4221,3956) code, respectively. Furthermore, all five codes have the same code rate of 93.7%. The irregular QC-LDPC (3843,3603) code constructed by the proposed scheme therefore has better error-correction performance and is better suited to optical transmission systems.
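A generic sketch of the QC-LDPC construction style is shown below: an exponent (base) matrix derived from the multiplicative structure of GF(q) is expanded into circulant permutation matrices of size (q-1). The tiny q and the i·j exponent rule are illustrative stand-ins only; the paper's own construction and its (3843,3603) parameters are not reproduced here.

```python
import numpy as np

def circulant(shift, size):
    """(size x size) circulant permutation matrix: identity cyclically shifted."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def qc_ldpc_parity_matrix(rows, cols, q):
    """Expand an exponent matrix E[i][j] = i*j mod (q-1), a pattern built from the
    multiplicative group of GF(q), into a block matrix of circulants."""
    size = q - 1
    E = [[(i * j) % size for j in range(cols)] for i in range(rows)]
    return np.block([[circulant(E[i][j], size) for j in range(cols)]
                     for i in range(rows)])

H = qc_ldpc_parity_matrix(rows=3, cols=6, q=8)
print("H shape:", H.shape, " row weight:", H[0].sum())
```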

10.
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble, in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and the excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
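Below is a small numerical illustration of random binning for source coding with decoder side information: the encoder sends only the bin index of x, and the decoder searches that bin for the sequence minimizing the empirical conditional entropy given y. The deterministic minimization is a stand-in for the stochastic likelihood decoder with an empirical-entropy metric; the semi-deterministic partition of small type classes is not modeled, the hash-based binning and block length are arbitrary, and at this toy length the search can occasionally fail.

```python
import numpy as np
from math import log2

rng = np.random.default_rng(5)

def emp_cond_entropy(x, y):
    """Empirical conditional entropy H_hat(X|Y) of two binary sequences."""
    n = len(x)
    h = 0.0
    for b in (0, 1):
        ny = np.sum(y == b)
        if ny == 0:
            continue
        for a in (0, 1):
            nxy = np.sum((x == a) & (y == b))
            if nxy:
                h -= (nxy / n) * log2(nxy / ny)
    return h

n, rate = 10, 0.6
num_bins = 2 ** int(n * rate)
x = rng.integers(0, 2, n)
y = (x ^ (rng.random(n) < 0.1)).astype(int)          # correlated side information

def bin_index(seq):
    return hash(tuple(seq)) % num_bins                # random binning of sequences

b = bin_index(x)                                      # the encoder sends only b
candidates = [np.array(c) for c in np.ndindex(*([2] * n))
              if bin_index(np.array(c)) == b]
x_hat = min(candidates, key=lambda c: emp_cond_entropy(c, y))
print("decoded correctly:", np.array_equal(x_hat, x))
```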

11.
Orthogonality is a much desired property for MIMO coding. It enables symbol-wise decoding, where errors in the other symbol estimates do not affect the result, thus providing an optimality worth pursuing. It also paves the way for low-complexity soft-decision decoding, which for orthogonal complex MIMO codes is known for two transmit (Tx) antennas, i.e. for the Alamouti code. We propose novel soft-decision decoders for the orthogonal complex MIMO codes on three and four Tx antennas and extend the old result of maximal ratio combining (MRC) to cover all orthogonal codes up to four Tx antennas. As a rule, a sophisticated transmission scheme encompasses forward error correction (FEC) coding, and its performance is measured at the FEC decoder instead of at the MIMO decoder. We introduce a receiver structure that delivers the MIMO decoder's soft decisions to the demodulator, which in turn computes the log-likelihood ratio (LLR) of each bit and delivers it to the FEC decoder. This significantly improves on a receiver in which a maximum likelihood (ML) MIMO decoder makes hard decisions at too early a stage. Further, the additional gain is achieved with strikingly low complexity.
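For context, the sketch below shows the known two-Tx case mentioned above: classic Alamouti combining at a single receive antenna, followed by per-bit BPSK LLRs of the kind that would be handed to an FEC decoder. It is not the authors' three- and four-antenna decoders; the BPSK mapping, noise level and LLR scaling are the usual Gaussian-channel assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N0 = 0.5                                   # complex noise variance
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

bits = rng.integers(0, 2, 2)
s1, s2 = 1.0 - 2.0 * bits                  # BPSK symbols

noise = np.sqrt(N0 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
r1 = h1 * s1 + h2 * s2 + noise[0]                        # slot 1: antennas send ( s1,  s2)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]     # slot 2: antennas send (-s2*, s1*)

# Alamouti combining (per-symbol maximal ratio combining).
gain = abs(h1) ** 2 + abs(h2) ** 2
z1 = np.conj(h1) * r1 + h2 * np.conj(r2)   # = gain * s1 + combined noise
z2 = np.conj(h2) * r1 - h1 * np.conj(r2)   # = gain * s2 + combined noise

# Soft decisions: LLRs of the BPSK bits delivered to the FEC decoder.
llr = 4 * np.real(np.array([z1, z2])) / N0
print(f"combining gain {gain:.2f}, tx bits {bits}, LLRs {np.round(llr, 2)}")
```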

12.
Existing physical-layer security techniques based on fountain codes require the legitimate channel to be better than the eavesdropping channel; when the two channels are of similar quality, information security cannot be guaranteed. To address this problem, this paper proposes a shifted Luby transform (SLT) code security scheme with partial information encryption, which consists of two stages: partial-information encrypted transfer and degree-distribution adjustment. The main idea is that the source randomly extracts part of the information symbols and XOR-encrypts them with a random sequence, containing main-channel noise, sent by the legitimate receiver. Afterwards, the degree distribution is adjusted using the number of transferred information symbols received by the legitimate receiver, so as to raise the average degree of the encoded codewords. Since the eavesdropper can obtain only a few information symbols in the initial stage, it is difficult for it to decode the coded symbols generated after the degree-distribution adjustment, thereby ensuring the secure transmission of information. The experimental results show that, compared with other LT anti-eavesdropping schemes, the proposed scheme still offers better security performance and lower decoding overhead even when the legitimate channel has no advantage.
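The toy below combines the two building blocks in the simplest possible way: (i) XOR a randomly chosen subset of the information symbols with a shared random sequence before fountain coding, and (ii) draw encoding degrees from a degree distribution. An ideal soliton distribution is used here for brevity, whereas the paper shifts/adjusts the distribution to raise the average degree; packet sizes, the protected fraction and the key source are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def ideal_soliton(k):
    p = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    return p / p.sum()

k, sym_len = 16, 8
source = rng.integers(0, 2, (k, sym_len), dtype=np.uint8)

# Stage 1: XOR-encrypt a random subset of the information symbols with a
# shared random sequence (stand-in for the legitimate receiver's keyed noise).
protected = rng.choice(k, size=k // 4, replace=False)
key = rng.integers(0, 2, (len(protected), sym_len), dtype=np.uint8)
precoded = source.copy()
precoded[protected] ^= key

def lt_encode(symbols, degree_probs, num_out):
    """Generate LT-coded symbols: pick a degree, XOR that many random symbols."""
    n = len(symbols)
    out = []
    for _ in range(num_out):
        d = rng.choice(np.arange(1, n + 1), p=degree_probs)
        neighbors = rng.choice(n, size=d, replace=False)
        coded = np.bitwise_xor.reduce(symbols[neighbors], axis=0)
        out.append((neighbors, coded))
    return out

coded = lt_encode(precoded, ideal_soliton(k), num_out=24)
print("average degree of coded symbols:", np.mean([len(n) for n, _ in coded]))
```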

13.
Belief propagation (BP) decoding of polar codes has been studied extensively because of its inherent parallelism. However, its performance remains inferior to that of successive cancellation list (SCL) decoding due to the structure of the decoding graph. To improve the block error rate (BLER) performance, BP correction (BPC) decoding, a post-processing scheme that corrects the prior knowledge of an identified code bit, improves convergence by executing additional iterations on the failed BP decoder, and it achieves better decoding performance than BP-based bit-flipping decoders. Nevertheless, the additional decoding attempts lead to increased latency. In this article, a modified BPC decoder is proposed that reduces the number of decoding attempts by redefining the correction rules, and a new metric is designed to identify the correction location effectively. Numerical results show that the proposed modified BPC decoder achieves a slight BLER improvement over the original BPC with a dramatic reduction in average complexity. Furthermore, a higher-order version, named MBPC-Ω, where Ω is the maximum correction order, is introduced to further improve the performance. Numerical results show that the higher-order modified BPC achieves a BLER performance similar to that of existing multiple-bit-flipping BP decoders with around half the latency overhead. In addition, the proposed MBPC-2 decoder performs better than the cyclic redundancy check-aided SCL (CA-SCL) decoder with list size 4 and slightly worse than CA-SCL with list size 8 in the high signal-to-noise ratio (SNR) region, but with a significant reduction in decoding latency.

14.
Jianguo Yuan  Wenwei Ye 《Optik》2009,120(15):758-764
A novel super forward error correction (SFEC) coding scheme for high-speed long-haul dense wavelength division multiplexing (DWDM) optical communication systems is proposed, based on the block turbo code (BTC) of Bose–Chaudhuri–Hocquenghem codes BCH(64,57)×BCH(64,57). The simulation results and their analysis show that, at a bit error rate (BER) of 10−12, the net coding gain (NCG) of the novel SFEC code at iteration 6 is 0.31 and 0.34 dB higher, respectively, than those of the BCH(3860,3824)+BCH(2040,1930) code and the Reed–Solomon RS(255,239)+convolutional self-orthogonal code CSOC(k0/n0=6/7, J=8) code of ITU-T Recommendation G.975.1 at iteration 3. The performance analysis of the novel SFEC code shows that it has the advantages of a shorter component code and fast encoding/decoding, so both the complexity of its software/hardware implementation and its encoding/decoding delay can be greatly reduced. As a result, the novel SFEC coding scheme is well suited to high-speed long-haul DWDM optical communication systems. In addition, the design and implementation of the novel BTC are also analyzed and discussed.
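The sketch below illustrates only the product-code (BTC) structure: a k×k information block is encoded row-wise and then column-wise with a systematic component code, producing checks-on-checks. A single-parity-check component stands in for the BCH(64,57) component of the paper, and no iterative SISO (turbo) decoding is shown; everything here is an illustrative assumption.

```python
import numpy as np

def spc_encode_rows(block):
    """Append one even-parity bit to every row (toy systematic component code)."""
    parity = block.sum(axis=1, keepdims=True) % 2
    return np.concatenate([block, parity], axis=1)

def product_encode(info, row_encode, col_encode):
    """Block turbo / product code: encode rows, then encode the columns
    (including the row-parity column), giving checks-on-checks."""
    rows_done = row_encode(info)
    return col_encode(rows_done.T).T

k = 4
rng = np.random.default_rng(8)
info = rng.integers(0, 2, (k, k), dtype=np.uint8)
codeword = product_encode(info, spc_encode_rows, spc_encode_rows)
print("info block", info.shape, "-> product codeword", codeword.shape)
```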

15.
A new spectral binary coding method based on high-order residual quantization
Spectral binary and multilevel coding techniques enable fast matching, recognition and classification of target spectra, but such quantized coding methods discard a large amount of spectral detail and cannot be decoded into a reconstructed spectrum that approximates the original, which limits their applications. To solve these problems, a new spectral coding method based on high-order residual quantization, HOBC (high-order binary coding), is proposed. First, the spectral vector is mean-removed and normalized to obtain a spectral sequence with values in (-1, 1). Then the ±1 code, the coding coefficient and the residual (i.e., the first-order residual) of the normalized spectrum are computed, and, starting from the first-order residual, the ±1 codes and coefficients of the residuals of orders 2 to K are solved order by order. Finally, K code sequences and their coefficients are obtained, which constitute the HOBC coding result. Using typical spectral-library datasets, HOBC was compared with four methods, the 0/1 binary coding BC01 (binary coding with 0 and 1), the spectral analysis coding SPAM (spectral analysis manager), the binary/four-level hybrid coding SDFC (spectral derivative feature coding) and the DNA four-level coding, in spectral quantization coding and decoding-reconstruction experiments. The information entropy and storage of the spectral shape-feature and slope-feature codes, the spectral vector distance (SVD) between the shape-feature code and the original spectrum, the inter-spectral Pearson correlation coefficient (SCC) and the spectral angle mapping (SAM) were evaluated. The results show that, in coding storage, the order-1 to order-4 HOBC codes equal the four comparison codes, respectively; in coding information entropy, the order-1 and order-2 HOBC codes equal BC01 and SPAM, respectively, while the order-3 and order-4 HOBC codes exceed SDFC and DNA coding; in SCC, the order-1 HOBC code equals BC01, while the order-2 to order-4 codes are superior to SPAM, SDFC and DNA coding, respectively; and in SAM, the order-1 to order-4 HOBC codes are all clearly superior to the four comparison methods. The four comparison methods cannot be explicitly decoded and reconstructed, whereas HOBC can easily reconstruct a decoded sequence approximating the original spectrum, with the SVD decreasing order by order. Furthermore, based on the public spectral dataset of the Linze grassland experiment station, spectral coding and supervised classification experiments were conducted for ten classes of ground targets. The results show that HOBC is clearly superior to the four comparison methods in the Kappa coefficient, overall classification accuracy and average classification accuracy; in particular, the order-4 HOBC code even outperforms the original spectra in classification, and HOBC is also more robust than the other algorithms for hard-to-classify targets with few samples and high inter-class similarity. This indicates that, while greatly compressing the data volume, the HOBC code sequences retain a large amount of information and high spectral separability, and can be used for fast, high-accuracy spectral recognition and classification; the decoded and reconstructed sequences are highly similar to the original spectra, so the method is in principle applicable to target recognition and classification.
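A direct sketch of the coding/decoding loop described above follows: normalize the spectrum into (-1, 1) after mean removal, then for orders 1..K take the ±1 sign code and a scalar coefficient of the current residual and subtract the approximation; the reconstruction is the sum of coefficient-weighted sign codes, and the residual norm (the SVD analogue) shrinks with the order. Choosing the coefficient as the mean absolute residual is an assumption (it is the natural least-squares scale), and the synthetic spectrum is a placeholder.

```python
import numpy as np

def hobc_encode(spectrum, K=4):
    """High-order binary coding: K (+/-1 code, coefficient) pairs of the residual."""
    r = spectrum - spectrum.mean()
    r = r / (np.abs(r).max() + 1e-12)          # normalize into (-1, 1)
    codes, coeffs = [], []
    for _ in range(K):
        b = np.where(r >= 0, 1.0, -1.0)        # +/-1 code of the current residual
        a = np.abs(r).mean()                   # scalar coefficient (assumed: mean |r|)
        codes.append(b)
        coeffs.append(a)
        r = r - a * b                          # next-order residual
    return codes, coeffs

def hobc_decode(codes, coeffs):
    return sum(a * b for a, b in zip(coeffs, codes))

rng = np.random.default_rng(9)
spectrum = np.cumsum(rng.normal(size=256))     # smooth stand-in spectrum
codes, coeffs = hobc_encode(spectrum, K=4)

normed = spectrum - spectrum.mean()
normed = normed / (np.abs(normed).max() + 1e-12)
for k in range(1, 5):
    err = np.linalg.norm(normed - hobc_decode(codes[:k], coeffs[:k]))
    print(f"order {k}: reconstruction error {err:.3f}")   # decreases with the order
```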

16.
The efficient coding hypothesis states that a neural response should maximize its information about the external input. Theoretical studies have focused on the optimal response of single neurons and on population codes in networks with weak pairwise interactions; however, more biologically realistic settings with asymmetric connectivity, and the encoding of dynamical stimuli, have not been well characterized. Here, we study the collective response in a kinetic Ising model that encodes a dynamic input. We apply a gradient-based method and a mean-field approximation to reconstruct networks given the neural code that encodes dynamic input patterns. We measure network asymmetry, decoding performance, and entropy production in networks that generate an optimal population code, and we analyze how stimulus correlation, time scale and the reliability of the network affect the optimal encoding networks. Specifically, we find network dynamics altered by the statistics of the dynamic input, identify stimulus-encoding strategies, and show an optimal effective temperature in the asymmetric networks. We further discuss how this approach connects to the Bayesian framework and to continuous recurrent neural networks. Together, these results bridge concepts of nonequilibrium physics with the analysis of dynamics and coding in networks.
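For orientation, the snippet below simulates the model class in question: a parallel-update (Glauber) kinetic Ising network with asymmetric couplings driven by a dynamic external input. It does not perform the gradient-based network reconstruction or the optimization of the population code; the network size, coupling scale and sinusoidal stimulus are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
N, T, beta = 20, 200, 1.0

J = rng.normal(0, 1 / np.sqrt(N), (N, N))         # asymmetric couplings: J[i, j] != J[j, i]
np.fill_diagonal(J, 0.0)
stimulus = np.sin(2 * np.pi * np.arange(T) / 50)  # dynamic external input

s = rng.choice([-1, 1], N)
trajectory = np.empty((T, N), dtype=int)
for t in range(T):
    # Parallel Glauber update: P(s_i = +1) = sigmoid(2*beta*(sum_j J_ij s_j + h(t))).
    field = J @ s + stimulus[t]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    s = np.where(rng.random(N) < p_up, 1, -1)
    trajectory[t] = s

print("mean activity:", trajectory.mean())
```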

17.
Research on FBG encoders/decoders for DS-OCDMA systems
丁美玲  章献民  陈抗生 《光子学报》2001,30(8):998-1002
The design ideas of parallel-structure and serial-structure fiber Bragg grating encoders/decoders are described in detail, their application in direct-sequence spread-spectrum optical fiber code-division multiple-access systems is discussed, and the encoding/decoding principles of each structure are derived. By analyzing the performance of the corresponding systems in terms of power efficiency and bit error rate, the optimal design parameters are obtained. The results show that, in addition to performance similar to that of fiber delay-line encoders/decoders of the same structural type, these devices also provide phase encoding/decoding capability.

18.
Quadriphase optical code-division multiple-access en/decoders based on superstructured fiber Bragg gratings
Compared with binary sequences, quadriphase sequence coding offers a larger code-family size and better cross-correlation properties, and is therefore better suited to optical code-division multiple-access (OCDMA) passive optical access networks. A quadriphase OCDMA en/decoder based on superstructured fiber Bragg gratings (SSFBGs) is proposed and demonstrated. The en/decoder uses Family A quadriphase sequences as address codes; only a single uniform phase mask is needed to realize the encoding function during fabrication, and its performance is comparable to that of encoders fabricated by conventional techniques. To match different wavelength channels, a variable-channel coding technique is proposed; simulation results show that encoders using this technique achieve higher spectral efficiency and thus better en/decoding performance. En/decoding experiments at an information rate of 2.5 Gb/s and a chip rate of 156 Gchip/s were carried out on a quadriphase en/decoder with a code length of 63 and a device length of 4.1 cm, and good en/decoding results were obtained.

19.
Although long polar codes with successive cancellation decoding can asymptotically achieve channel capacity, the performance of short-blocklength polar codes is far from optimal. Recently, Arıkan proposed employing a convolutional pre-transformation before the polarization network, yielding polarization-adjusted convolutional (PAC) codes. In this paper, we focus on improving the performance of short PAC codes concatenated with a cyclic redundancy check (CRC) outer code, CRC-PAC codes, since error detection capability is essential in practical applications such as the polar coding scheme for the control channel. We propose an enhanced adaptive belief propagation (ABP) decoding algorithm for PAC codes assisted by the CRC bits, and we derive joint parity-check matrices of CRC-PAC codes suitable for iterative BP decoding. The proposed CRC-aided ABP (CA-ABP) decoding effectively improves the error performance when partial CRC bits are used in decoding, while error detection can still be guaranteed by the remaining CRC bits and the adaptive decoding parameters. Moreover, compared with conventional CRC-aided list (CA-List) decoding, the proposed scheme significantly reduces the computational complexity, achieving a better trade-off between performance and complexity for short PAC codes.
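A minimal PAC encoder for a toy (N, K) code is sketched below: information bits are placed on a rate profile, the convolutional pre-transformation is applied, and the result is passed through the polar transform F^(⊗n) over GF(2). The CRC concatenation and the proposed CA-ABP decoder are not shown; the rate profile is an arbitrary placeholder rather than an optimized one, and the length-7 generator (0o133, common in the PAC literature) is an assumption.

```python
import numpy as np

def polar_transform(u):
    """x = u * F^(kron n) over GF(2), with F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    while G.shape[0] < len(u):
        G = np.kron(G, F)
    return u @ G % 2

def pac_encode(info_bits, N, profile, conv_gen=(1, 0, 1, 1, 0, 1, 1)):
    """PAC encoding: rate profiling -> convolutional pre-transform -> polar transform."""
    v = np.zeros(N, dtype=np.uint8)
    v[profile] = info_bits                       # rate profiling
    u = np.zeros(N, dtype=np.uint8)
    for i in range(N):                           # u_i = sum_j c_j * v_{i-j} (mod 2)
        for j, c in enumerate(conv_gen):
            if c and i - j >= 0:
                u[i] ^= v[i - j]
    return polar_transform(u)

N, K = 16, 8
profile = np.array([7, 9, 10, 11, 12, 13, 14, 15])   # placeholder info positions
info = np.random.default_rng(10).integers(0, 2, K, dtype=np.uint8)
x = pac_encode(info, N, profile)
print("PAC codeword:", x)
```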

20.
A zero cross-correlation (ZCC) code is proposed to reduce the impact of system impairments and multiple access interference (MAI) in spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) systems. The bit-error-rate (BER) performance is derived taking the effects of several noise sources into account. The key to an effective OCDMA system is the choice of efficient address codes with good, or nearly zero, correlation properties for encoding the source. The use of a ZCC code eliminates phase-induced intensity noise (PIIN), which contributes to a better BER. We therefore demonstrate, theoretically, the performance of the optical ZCC code and show that it can accommodate more simultaneous users at the typical optical-communication error rate of 10−9. The results indicate that the proposed system not only preserves the capability of suppressing MAI, but also improves the bit-error-rate performance compared with conventional coders.
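The simplest way to obtain zero cross-correlation in SAC-OCDMA is to assign each user a disjoint set of spectral chips; the sketch below builds such a weight-w code matrix and verifies that all pairwise in-phase cross-correlations are zero, which is the property that removes PIIN/MAI in the analysis. The paper's own ZCC construction may use a different mapping, and the number of users and code weight here are illustrative.

```python
import numpy as np

def zcc_code(num_users, weight):
    """Weight-`weight` codes on disjoint spectral chips -> zero cross-correlation."""
    length = num_users * weight
    C = np.zeros((num_users, length), dtype=int)
    for u in range(num_users):
        C[u, u * weight:(u + 1) * weight] = 1    # each user lights its own chips
    return C

C = zcc_code(num_users=4, weight=3)
corr = C @ C.T                                   # in-phase correlation matrix
print(corr)                                      # diagonal = weight, off-diagonal = 0
assert np.all(corr[~np.eye(len(C), dtype=bool)] == 0)
```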
