Similar Documents
20 similar documents found (search time: 15 ms)
1.
An end-to-end joint source–channel (JSC) encoding matrix and a JSC decoding scheme using the proposed bit flipping check (BFC) algorithm and controversial variable node selection-based adaptive belief propagation (CVNS-ABP) decoding algorithm are presented to improve the efficiency and reliability of the joint source–channel coding (JSCC) scheme based on double Reed–Solomon (RS) codes. The constructed coding matrix can realize source compression and channel coding of multiple sets of information data simultaneously, which significantly improves the coding efficiency. The proposed BFC algorithm uses channel soft information to select and flip the unreliable bits, and then uses the redundancy of the source block for error verification and error correction. The proposed CVNS-ABP algorithm reduces the influence of erroneous bits on decoding by selecting error variable nodes (VNs) from the controversial VNs and incorporating them into the sparsification of the parity-check matrix. In addition, the proposed JSC decoding scheme based on the BFC and CVNS-ABP algorithms links the source and channel decoders to improve JSC decoding performance. Simulation results show that the proposed BFC-based hard-decision decoding (BFC-HDD) algorithm (ζ = 1) and BFC-based low-complexity chase (BFC-LCC) algorithm (ζ = 1, η = 3) achieve signal-to-noise ratio (SNR) gains of about 0.23 dB and 0.46 dB over the prior-art decoding algorithm at a frame error rate (FER) of 10^-1. Compared with the ABP algorithm, the proposed CVNS-ABP and BFC-CVNS-ABP algorithms achieve performance gains of 0.18 dB and 0.23 dB, respectively, at FER = 10^-3.
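The flip-and-verify loop described in entry 1 can be sketched as follows. This is a minimal illustration, not the paper's construction: the even-parity source check, the flip budget, and all function names are assumptions.

```python
import numpy as np

def bfc_flip(llrs, source_check, max_flips=3):
    """Sketch of a BFC-style pass: try flipping the least-reliable
    hard-decision bits until the source-redundancy check passes."""
    hard = (np.asarray(llrs) < 0).astype(int)      # hard decisions from LLRs
    if source_check(hard):
        return hard, True
    order = np.argsort(np.abs(llrs))               # least-reliable bits first
    for k in range(1, max_flips + 1):
        trial = hard.copy()
        trial[order[:k]] ^= 1                      # flip the k least-reliable bits
        if source_check(trial):
            return trial, True
    return hard, False

# Toy stand-in for source redundancy: even parity over the block (an assumption).
parity_ok = lambda bits: bits.sum() % 2 == 0
llrs = np.array([2.1, -0.3, 1.7, 2.5])             # bit 1 is the least reliable
decoded, ok = bfc_flip(llrs, parity_ok)
```

Here the initial hard decision fails the parity check, so the single least-reliable bit is flipped and the check is re-run.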

2.
A novel hard-decision decoding scheme based on a hybrid intelligent algorithm combining a genetic algorithm and a neural network, named genetic neural-network decoding (GND), is proposed. GND offsets the reliability loss caused by channel transmission errors and hard-decision quantization by making full use of the genetic algorithm's optimization capacity and the neural network's pattern-classification ability to optimize the hard-decision outputs of the receiver's matched filter and restore a more likely codeword as the input to the hard-decision decoder. As shown by theoretical analysis and computer simulation, the GND scheme approaches traditional soft-decision decoding in error-correction performance, while its complexity is greatly reduced compared with traditional soft-decision decoding, because its decoding process does not require channel statistical information.

3.
Although multiple-input multiple-output (MIMO) techniques can significantly increase channel capacity, decoding complexity and accuracy remain among the core problems to be solved. By combining the existing zero-forcing detection algorithm for Bell Labs layered space-time (BLAST) codes with the sphere decoding algorithm, and focusing on the basic characteristics of the wireless MIMO channel, namely the channel condition number and the signal-to-noise ratio, an adaptive BLAST decoding algorithm is proposed that reduces the decoding complexity of the system while maintaining bit-error-rate performance, laying a good foundation for research on the Long Term Evolution of wireless communications.
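Entry 3's switching idea, choosing a detector from the channel condition number, can be sketched as below; the threshold, the BPSK alphabet, and the brute-force search standing in for a real sphere decoder are all illustrative assumptions.

```python
import numpy as np
from itertools import product

def detect_adaptive(H, y, cond_thresh=10.0):
    """Sketch of adaptive MIMO detection: use cheap zero-forcing when the
    channel is well conditioned, fall back to an exact search otherwise."""
    if np.linalg.cond(H) < cond_thresh:
        x_zf = np.linalg.pinv(H) @ y               # zero-forcing estimate
        return np.sign(x_zf), "zf"
    # Brute-force ML stands in for sphere decoding at this toy size.
    best = min(product([-1.0, 1.0], repeat=H.shape[1]),
               key=lambda x: np.linalg.norm(y - H @ np.array(x)))
    return np.array(best), "ml"

H = np.array([[1.0, 0.1], [0.1, 1.0]])             # well-conditioned 2x2 channel
x = np.array([1.0, -1.0])
s, mode = detect_adaptive(H, H @ x)                # noiseless receive for clarity
```

With a well-conditioned channel the cheap zero-forcing branch is taken and recovers the transmitted BPSK vector exactly.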

4.
The underwater acoustic channel has a long delay spread and severe frequency-selective fading, which makes it difficult to improve the reliability of underwater acoustic communication. Moreover, the sources actually transmitted in underwater acoustic communication usually contain residual redundancy that traditional methods cannot exploit, resulting in some waste of bandwidth. To address this problem, a joint source–channel decoding method for underwater acoustic communication based on polar codes is proposed. According to the decoding structure of polar codes, the method constructs a joint source–channel decoding network on the basis of the source state-transition relations...

5.
Wang Yunjiang, Bai Baoming, Peng Jinye, Wang Xinmei. Acta Physica Sinica, 2011, 60(3): 030306.
For the X–Z type Pauli quantum channel, this paper constructs a feedback sum-product decoding algorithm for quantum sparse graph codes. Compared with the basic sum-product algorithm, the feedback decoding strategy exploits comparisons of error patterns and the values of the relevant elements of the stabilizers; in particular, it fully accounts for the proportion of errors carried by each variable according to the channel characteristics, and adjusts the probability distributions of the information nodes accordingly. This feedback strategy plays the role of the soft-decision technique in classical decoding: it not only overcomes the adverse effects of the symmetric degeneracy problem, but, more importantly, supplies the decoder with more useful information, thereby greatly improving its error-correction capability. Moreover, the feedback sum-product algorithm operates over GF(4), which greatly extends the sum-product decoder's scope for quantum decoding... Keywords: quantum sparse graph codes; sum-product algorithm; quantum error-correcting codes; quantum information

6.
In this paper, a joint early stopping criterion based on cross entropy (CE), named the joint CE criterion, is presented for double-protograph low-density parity-check (DP-LDPC) code-based joint source-channel coding (JSCC) systems for image transmission, to reduce decoding complexity and decoding delay. The proposed early stopping criterion computes the CE from the output log-likelihood ratios (LLRs) of the joint decoder. Moreover, a special phenomenon named asymmetry oscillation-like convergence (AOLC) in the evolution of the CE is uncovered in both the source decoder and the channel decoder of this system, and the proposed joint CE criterion reduces the impact of the AOLC phenomenon. Compared with its counterparts, the results show that the joint CE criterion performs well in decoding complexity and decoding latency in the low-to-moderate signal-to-noise ratio (SNR) region and achieves performance improvement in the high SNR region with appropriate parameters, which also demonstrates that this system with the joint CE criterion is a low-latency, low-power system.
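Entry 6 applies a cross-entropy measure to the joint decoder's LLRs; the classical single-decoder form of such a stopping rule (the Hagenauer-style metric below, with an assumed stopping ratio) gives the flavor, though the paper's joint criterion differs in detail.

```python
import numpy as np

def ce_metric(llr_prev, llr_curr):
    """Classical cross-entropy-style stopping metric: small values
    indicate the iterative decoder has converged."""
    d = llr_curr - llr_prev
    return np.sum(d**2 / np.exp(np.abs(llr_curr)))

def run_with_early_stop(llr_iters, ratio=1e-3):
    """Stop when the CE metric drops below ratio * (first-iteration metric)."""
    t1 = None
    for i in range(1, len(llr_iters)):
        t = ce_metric(llr_iters[i - 1], llr_iters[i])
        t1 = t if t1 is None else t1
        if t < ratio * t1:
            return i                               # iterations actually used
    return len(llr_iters) - 1

# Synthetic LLR trajectory of a converging decoder (an assumption).
iters = [np.zeros(4)] + [m * np.array([1.0, -1.0, 1.0, -1.0])
                         for m in (2, 4, 6, 6.0001, 6.00011)]
used = run_with_early_stop(iters)
```

On this synthetic trajectory the LLR magnitudes grow and stabilize, so the criterion halts one iteration early instead of running the full schedule.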

7.
Yan Dandan, Fan Xingkui, Chen Zhenyu, Ma Hongyang. Chinese Physics B, 2022, 31(1): 010304.
Quantum error-correction codes are invaluable resources for quantum computing and quantum communication. However, existing decoders are generally incapable of handling duplicated check nodes in belief propagation (BP) on quantum low-density parity-check (QLDPC) codes. Based on probability theory from machine learning, mathematical statistics, and topological structure, a GF(4) (the Galois field is abbreviated as GF) augmented-model BP decoder with a Tanner graph is designed. The problem of repeated check nodes can be solved by this decoder. In simulation, when the random perturbation strength p = 0.0115–0.0116 and the number of attempts N = 60–70, the highest decoding efficiency of the augmented-model BP decoder is obtained, and the low-loss frame error rate (FER) decreases to 7.1975 × 10^-5. We then use the proposed augmented-model decoder to compare the behavior of GF(2) and GF(4) for the quantum code [[450, 200]] on the depolarizing channel. It is verified that the proposed decoder offers a wide application range and better decoding performance on QLDPC codes.

8.
We propose a novel variant of the gradient descent bit-flipping (GDBF) algorithm for decoding low-density parity-check (LDPC) codes over the binary symmetric channel. The new bit-flipping rule is based on the reliability information passed from neighboring nodes in the corresponding Tanner graph. The name SuspicionDistillation reflects the main feature of the algorithm: in every iteration, we assign a level of suspicion to each variable node about its current bit value. The level of suspicion of a variable node is used to decide whether the corresponding bit will be flipped. In addition, in each iteration, we determine the number of satisfied and unsatisfied checks that connect a suspicious node with other suspicious variable nodes. In this way, over the course of the iterations, we "distill" such suspicious bits and flip them. The deterministic nature of the proposed algorithm results in a low-complexity implementation, as the bit-flipping rule can be obtained by modifying the original GDBF rule using basic logic gates, and the modification is not applied in all decoding iterations. Furthermore, we present a more general framework based on deterministic re-initialization of the decoder input. The performance of the resulting algorithm is analyzed for codes of various lengths, and significant performance improvements are observed compared to state-of-the-art hard-decision decoding algorithms.
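For context, entry 8 builds on the plain GDBF rule, which can be sketched as follows; the tiny two-check code and the flip-all-minima policy are illustrative assumptions, and SuspicionDistillation's refinement of the candidate selection is not shown.

```python
import numpy as np

def gdbf_decode(H, y, max_iter=50):
    """Plain gradient-descent bit flipping over the BSC with bipolar
    (+1/-1) values; H is a list of index arrays, one per check."""
    x = y.astype(float)                             # start from channel output
    for _ in range(max_iter):
        syn = np.array([np.prod(x[row]) for row in H])   # +1 = check satisfied
        if np.all(syn == 1):
            return x, True
        # Inversion function: channel agreement plus sum of adjacent checks.
        delta = x * y + np.array(
            [syn[[m for m in range(len(H)) if k in H[m]]].sum()
             for k in range(len(x))])
        x[delta == delta.min()] *= -1               # flip the most suspicious bits
    return x, False

# Toy code with two checks {0,1} and {1,2}; one channel flip on bit 2.
H = [np.array([0, 1]), np.array([1, 2])]
y = np.array([1, 1, -1])
decoded, ok = gdbf_decode(H, y)
```

The inversion function is smallest for the bit that disagrees with the most checks, so that bit is flipped and both checks become satisfied.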

9.
Beyond-5G wireless networks are expected to achieve an excellent trade-off among computational accuracy, latency, and efficient use of available resources. This poses a significant challenge to the channel decoder. In this paper, a novel memory-efficient algorithm for decoding Low-Density Parity-Check (LDPC) codes is proposed with a view to reducing implementation complexity and hardware resources. The algorithm, called the Check Node Self-Update (CNSU) algorithm, is based on the layered normalized min-sum (LNMS) decoding algorithm and utilizes iteration-parallel techniques to integrate both the variable node (VN) messages and the a posteriori probability (APP) messages into the check node (CN) messages, which eliminates the memories for both the VN messages and the APP messages as well as the APP update module in the CN unit. Based on the proposed CNSU algorithm, a partially parallel decoder architecture is designed, and serial simulations followed by implementation on a Stratix II EP2S180 FPGA are presented. The results show that the proposed algorithm and implementation bring a significant gain in the efficient use of available resources, including reduced hardware memory and chip area, while keeping the bit-error-rate (BER) performance and convergence speed-up of LNMS, which is beneficial for Beyond-5G wireless networks.

10.
Belief propagation (BP) decoding for polar codes has been extensively studied because of its inherent parallelism. However, its performance remains inferior to that of successive cancellation list (SCL) decoding due to the structure of the decoding graph. To improve the block error rate (BLER) performance, BP correction (BPC) decoding, a post-processing scheme that corrects the prior knowledge of an identified code bit, improves convergence by executing additional iterations on the failed BP decoder. Moreover, the BPC decoder demonstrates better decoding performance than the BP-based bit-flipping decoder. Nevertheless, the additional decoding attempts lead to increased latency. In this article, a modified BPC decoder is proposed to reduce the number of decoding attempts by redefining the correction rules. A new metric is designed to effectively identify the correction location. Numerical results show that the proposed modified BPC decoder achieves a slight improvement in BLER compared with the original BPC, with a dramatic reduction in average complexity. Furthermore, a higher-order version, named MBPC-Ω, is developed to further improve performance, where Ω is the maximum correction order. Numerical results show that the higher-order modified BPC achieves BLER performance similar to existing multiple-bit-flipping BP decoders with around half the latency overhead. In addition, the proposed MBPC-2 decoder performs better than the cyclic redundancy check-aided SCL (CA-SCL) decoder with list size 4 and slightly worse than CA-SCL with list size 8 in high signal-to-noise ratio (SNR) regions, but with a significant reduction in decoding latency.

11.
Wang Yunjiang, Bai Baoming, Wang Xinmei. Acta Physica Sinica, 2010, 59(11): 7591-7595.
The decoding of quantum sparse graph codes can be realized by the error-pattern-based sum-product decoding algorithm. On this basis, this paper constructs a new feedback iterative decoding algorithm. The feedback strategy not only reuses the error pattern, but also exploits the values of the corresponding elements of the stabilizers and the error model of the channel. Thus, on the one hand, the method can overcome the so-called symmetric degeneracy errors encountered in the conventional quantum sum-product decoding algorithm; on the other hand, it feeds more useful information back to the decoder, helping it produce valid decoding results and greatly improving its decoding capability. Moreover, the algorithm does not increase the complexity of quantum measurement; rather, it makes fuller use of the information that the measurements can provide.

12.
This paper deals with the specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents for these codes transmitted over the memoryless binary symmetric channel (BSC), both for the well-known maximum-likelihood (ML) decoding and for proposed low-complexity decoding algorithms. We prove the existence of LDPC codes for which the probability of erroneous decoding decreases exponentially with the growth of the code length, at coding rates below the corresponding channel capacity. We also show that the obtained error-exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.
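For reference, the classical random-coding benchmark that LDPC error-exponent bounds of this kind are usually measured against is Gallager's bound for the BSC with crossover probability $p$ (stated here from the standard literature, not from the paper itself), with rates in bits per channel use:

```latex
P_e \le 2^{-n E_r(R)}, \qquad
E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr],
```

```latex
E_0(\rho) = \rho - (1+\rho)\log_2\!\left( p^{1/(1+\rho)} + (1-p)^{1/(1+\rho)} \right).
```

Here $E_r(R) > 0$ for every rate $R$ below the BSC capacity $1 - H_2(p)$, which is the sense in which the error probability decays exponentially in the block length $n$.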

13.
A traditional successive cancellation (SC) decoding algorithm suffers from error propagation in the decoding process. To improve SC decoding performance, it is important to mitigate this error propagation. In this paper, we propose a new algorithm combining reinforcement learning with SC flip (SCF) decoding of polar codes, called the Q-learning-assisted SCF (QLSCF) decoding algorithm. The proposed QLSCF decoding algorithm uses reinforcement learning to select candidate bits for SC flip decoding. We establish a reinforcement learning model for selecting candidate bits, and the agent selects candidate bits for decoding the information sequence. In our scheme, the decoding delay caused by metric ordering is removed from the decoding process. Simulation results demonstrate that the decoding delay of the proposed algorithm is reduced compared with critical-set-based SCF decoding, without loss of performance.
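Entry 13's agent can be caricatured as a one-state (bandit-style) Q-learner that ranks flip candidates by learned value; the reward function, hyperparameters, and the stand-in for actual SCF decoding attempts below are all assumptions for illustration, not the paper's model.

```python
import numpy as np

def q_select_flips(num_bits, reward_fn, episodes=500, eps=0.2, alpha=0.1, seed=1):
    """Bandit-style simplification: learn from repeated decoding attempts
    which bit positions are the most promising flip candidates.
    reward_fn(bit) returns 1.0 when flipping that bit fixes decoding, else 0.0."""
    rng = np.random.default_rng(seed)
    q = np.zeros(num_bits)                          # one Q-value per candidate bit
    for _ in range(episodes):
        # Epsilon-greedy action selection over candidate bit positions.
        a = rng.integers(num_bits) if rng.random() < eps else int(np.argmax(q))
        q[a] += alpha * (reward_fn(a) - q[a])       # one-step value update
    return np.argsort(q)[::-1]                      # bits ranked by learned value

# Toy target: flipping bit 3 always repairs the (hypothetical) SC decoding.
ranking = q_select_flips(8, lambda bit: 1.0 if bit == 3 else 0.0)
```

After enough exploration the learned ranking places the genuinely helpful flip position first, which is the role the candidate-selection agent plays in the QLSCF scheme.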

14.
Bit-interleaved coded modulation (BICM) has attracted considerable attention from the research community in the past three decades, because it can achieve desirable error performance with relatively low implementation complexity for a large number of communication and storage systems. By exploiting iterative demapping and decoding (ID), BICM is able to approach the capacity limits of coded modulation over various channels. In recent years, protograph low-density parity-check (PLDPC) codes and their spatially coupled (SC) variants have emerged as a pragmatic forward-error-correction (FEC) solution for BICM systems due to their tremendous error-correction capability and simple structures, and have found widespread applications in deep-space communication, satellite communication, wireless communication, optical communication, and data storage. This article offers a comprehensive survey of the state-of-the-art development of PLDPC-BICM and its innovative SC variants over a variety of channel models, e.g., additive white Gaussian noise (AWGN) channels, fading channels, Poisson pulse-position modulation (PPM) channels, and flash-memory channels. Of particular interest are code construction, constellation shaping, and bit-mapper design, where the receiver is formulated as a serially concatenated decoding framework consisting of a soft-decision demapper and a belief-propagation decoder. Finally, several promising research directions are discussed, which have not been adequately addressed in the current literature.

15.
Algebraic soft-decision Reed–Solomon (RS) decoding algorithms with improved error-correcting capability and complexity comparable to standard algebraic hard-decision algorithms could be very attractive for possible implementation in the next generation of read channels. In this work, we investigate the performance of a low-complexity Chase (LCC)-type soft-decision RS decoding algorithm, recently proposed by Bellorado and Kavčić, on perpendicular magnetic recording channels for sector-long RS codes of practical interest. Previous results for additive white Gaussian noise channels have shown that for a moderately long high-rate code, the LCC algorithm can achieve a coding gain comparable to the Koetter–Vardy algorithm with much lower complexity. We present a set of numerical results showing that this algorithm provides small coding gains, on the order of a fraction of a dB, with complexity similar to the hard-decision algorithms currently used, and that larger coding gains can be obtained by using more test patterns, which significantly increases the computational complexity.

16.
Tian Yujing, Zuo Hongwei, Wang Chao. Applied Acoustics (应用声学), 2020, 39(6): 932-939.
In speech communication systems, speech transmitted over a channel inevitably suffers inter-symbol interference (ISI) and signal distortion, and is also corrupted by noise. Building on an analysis of the adaptive blind-equalization constant modulus algorithm (CMA) and improved blind-equalization algorithms, and considering the limited noise-control capability of adaptive blind equalization, this paper combines adaptive blind equalization with a wavelet-packet masking-threshold denoising algorithm to form a new baseband speech enhancement method. Simulation results show that adaptive blind equalization makes the constellation diagram clear and compact and effectively reduces the bit error rate. The study confirms that, even when the speech signal suffers severe ISI and distortion, the method provides stable noise reduction in both white-noise and colored-noise environments, while preserving good perceptual quality for Mandarin Chinese speech.
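Entry 16's baseband chain starts from CMA blind equalization. A minimal real-valued CMA sketch is shown below; the channel taps, step size, and target modulus are illustrative assumptions, and the wavelet-packet denoising stage is not shown.

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=2e-3, R2=1.0):
    """Minimal real-valued constant modulus algorithm (CMA): adapt an FIR
    equalizer so that output samples approach the target modulus R2."""
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                         # center-spike initialization
    out = np.empty(len(x) - num_taps + 1)
    for n in range(len(out)):
        u = x[n:n + num_taps][::-1]                # regressor, newest sample first
        y = w @ u
        w -= mu * (y * y - R2) * y * u             # stochastic CMA gradient step
        out[n] = y
    return out, w

# BPSK through a mild two-tap ISI channel (assumed for illustration).
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)
received = np.convolve(symbols, [1.0, 0.2])
out, w = cma_equalize(received)
```

Because CMA needs no training sequence, it adapts blindly from the constant-modulus property of the BPSK constellation, which is the "blind" aspect the abstract relies on.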

17.
Zhang Qian, Liu Guangbin, Yu Zhiyong, Guo Jinku. Acta Physica Sinica, 2015, 64(1): 018404.
This paper studies the influence of factors such as redundant relays, the numbers of secondary users and relay users, detection thresholds, and channel transmission error rates on the performance of relay-cooperative spectrum sensing systems, and proposes a new adaptive global optimization algorithm. Based on an adaptive relay selection method that maximizes the interference-free power, the algorithm determines the candidate cognitive relay set; each secondary user then adaptively selects the best relay from the candidate set under the minimum channel-transmission-error-rate criterion, so as to maximize the overall detection probability; finally, given a target detection probability, the adaptive global optimization algorithm is derived under the maximum-system-throughput criterion. Simulation results show that the new algorithm achieves high channel transmission accuracy and large channel throughput, and saves bandwidth resources.

18.
A distributed arithmetic coding algorithm based on source symbol purging and a context model is proposed to solve the asymmetric Slepian–Wolf problem. The proposed scheme makes better use of both the correlation between adjacent symbols in the source sequence and the correlation between corresponding symbols of the source and side-information sequences to improve the coding performance of the source. Since the encoder purges a part of the symbols from the source sequence, a shorter codeword length can be obtained; the purged symbols are still used as context for the subsequent symbols to be encoded. An improved calculation method for the posterior probability is also proposed based on the purging feature, such that the decoder can exploit the correlation within the source sequence to improve decoding performance. In addition, the scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. The simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits better decoding performance than other distributed source coding (DSC) schemes at the same code rate.
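The forbidden-symbol mechanism mentioned in entry 18 can be illustrated with a toy floating-point arithmetic coder: a fraction mu of every coding interval is reserved and never used by the encoder, so a decoder that lands there knows the codeword was corrupted. This is a sketch of the general error-detection idea only; the paper's context model, symbol purging, and posterior computation are not represented.

```python
def encode(bits, p1=0.5, mu=0.1):
    """Toy arithmetic encoder that reserves a 'forbidden' fraction mu
    at the top of every coding interval."""
    lo, hi = 0.0, 1.0
    for b in bits:
        span = (hi - lo) * (1.0 - mu)              # usable part of the interval
        split = lo + span * (1.0 - p1)             # boundary between bit 0 and 1
        lo, hi = (split, lo + span) if b else (lo, split)
    return (lo + hi) / 2                           # any value inside the interval

def decode(value, n, p1=0.5, mu=0.1):
    """Mirror of the encoder; flags an error if value hits forbidden space."""
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(n):
        span = (hi - lo) * (1.0 - mu)
        if value >= lo + span:                     # landed in forbidden region
            return out, False
        split = lo + span * (1.0 - p1)
        b = int(value >= split)
        out.append(b)
        lo, hi = (split, lo + span) if b else (lo, split)
    return out, True
```

A clean codeword round-trips exactly, while a corrupted value that strays into the reserved region is detected rather than silently mis-decoded.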

19.
Ji Zhe, Jia Dagong, Zhang Hongxia, Zhang Delong, Liu Tiegen, Zhang Yimo. Acta Physica Sinica, 2015, 64(3): 034218.
In optical code-division multiple-access (OCDMA) systems, the optical encoder/decoder is one of the key factors affecting system performance. The autocorrelation peak-to-sidelobe ratio (P/W) and the autocorrelation-peak-to-cross-correlation ratio (P/C) are two important metrics of encoder/decoder performance. Using silicon-on-insulator (SOI) microring resonators as the platform, a two-dimensional coherent OCDMA encoder/decoder model based on a serial three-ring array is proposed. The influences of the coupling coefficients, loss coefficient, array spacing, and channel spacing on the performance of the microring-resonator encoder/decoder are studied in detail. The results show that good performance is obtained for microrings of 50 μm radius when the ring-to-waveguide coupling coefficient is between 0.6 and 0.7, the ring-to-ring coupling coefficient is between 0.1 and 0.2, the loss coefficient is < 2 dB/cm, the array spacing is greater than 3 mm, and the channel spacing is between 25 and 36 GHz.

20.
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and the excess-rate probabilities vanish exponentially as the blocklength tends to infinity.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号