Similar documents
 9 similar documents found (search time: 15 ms)
1.
Belief propagation (BP) decoding for polar codes has been extensively studied because of its inherent parallelism. However, its performance remains inferior to that of successive cancellation list (SCL) decoding due to the structure of the decoding graph. To improve the block error rate (BLER) performance, BP correction (BPC) decoding was proposed: a post-processing scheme that corrects the prior knowledge of an identified code bit and improves convergence by executing additional iterations on the failed BP decoder. The BPC decoder also achieves better decoding performance than the BP-based bit-flipping decoder. Nevertheless, the additional decoding attempts increase latency. In this article, a modified BPC decoder is proposed that reduces the number of decoding attempts by redefining the correction rules, together with a new metric designed to identify the correction location effectively. Numerical results show that the proposed modified BPC decoder achieves a slight BLER improvement over the original BPC, with a dramatic reduction in average complexity. Furthermore, a higher-order version, named MBPC-Ω, where Ω is the maximum correction order, is proposed to further improve performance. Numerical results show that the higher-order modified BPC achieves BLER performance similar to existing multiple-bit-flipping BP decoders at around half the latency overhead. In addition, the proposed MBPC-2 decoder outperforms the cyclic redundancy check-aided SCL (CA-SCL) decoder with list size 4 and is only slightly worse than CA-SCL with list size 8 in high signal-to-noise ratio (SNR) regions, while significantly reducing decoding latency.
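The post-processing idea can be sketched in a few lines. This is an illustrative outline only, not the authors' implementation: the BP decoder is treated as a black box, and the candidate list and correction rule are placeholders for the metric the paper actually defines.

```python
def bpc_decode(bp_decode, channel_llrs, candidate_bits, max_attempts):
    """Sketch of BPC-style post-processing: if a plain BP attempt fails,
    retry with the prior LLR of one identified code bit corrected
    (sign-flipped, or saturated if it was zero). All names illustrative."""
    ok, bits = bp_decode(channel_llrs)            # normal BP attempt first
    if ok:
        return ok, bits
    for idx in candidate_bits[:max_attempts]:     # bits ranked by some correction metric
        llrs = list(channel_llrs)
        llrs[idx] = -llrs[idx] if llrs[idx] != 0 else 1e9   # correct the prior
        ok, bits = bp_decode(llrs)                # extra iterations on corrected prior
        if ok:
            return ok, bits
    return False, bits
```

Because each retry touches only one prior, the average cost stays close to a single BP run whenever the first attempt already succeeds.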

2.
Polar codes are a relatively new family of linear block codes that have garnered much attention from the scientific community owing to their low-complexity implementation and provably capacity-achieving capability. They have been proposed for encoding information on the control channels of 5G wireless networks due to their robustness at short codeword lengths. The basic approach introduced by Arikan can only generate polar codes of length N = 2^n, n ∈ ℕ. To overcome this limitation, polarization kernels of size larger than 2 × 2 (such as 3 × 3, 4 × 4, and so on) have been proposed in the literature. Additionally, kernels of different sizes can be combined to generate multi-kernel polar codes, further improving the flexibility of codeword lengths. These techniques undoubtedly improve the usability of polar codes in various practical implementations. However, with so many design options and parameters available, designing polar codes optimally tuned to specific underlying system requirements becomes extremely challenging, since a variation in system parameters can lead to a different choice of polarization kernel. This necessitates a structured design technique for optimal polarization circuits. In earlier work, we developed the DTS-parameter to quantify the best rate-matched polar codes, and then developed and formalized a recursive technique for designing higher-order polarization kernels from smaller-order components. A scaled version of the DTS-parameter, the SDTS-parameter (denoted by the symbol ζ in this article), was used for the analytical assessment of this construction technique and validated for single-kernel polar codes. In this paper, we extend the analysis of the SDTS parameter to multi-kernel polar codes and validate its applicability in this domain as well.
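The length flexibility of multi-kernel construction comes from Kronecker-combining kernels of different sizes, so the code length is the product of the kernel sizes rather than a power of 2. A minimal sketch, with the caveat that the 3 × 3 kernel shown is just one possible polarizing choice from the literature, not a kernel prescribed by this paper:

```python
T2 = [[1, 0],
      [1, 1]]                     # Arikan's 2x2 kernel
T3 = [[1, 0, 0],
      [1, 1, 0],
      [1, 0, 1]]                  # an illustrative 3x3 polarizing kernel (assumption)

def kron_gf2(A, B):
    """Kronecker product over GF(2), matrices as lists of lists."""
    return [[(a * b) % 2 for a in ra for b in rb] for ra in A for rb in B]

def multi_kernel_transform(kernels):
    """Combine kernels by Kronecker product: the code length N is the
    product of the kernel sizes, e.g. [T2, T3] gives N = 6, not a power of 2."""
    G = [[1]]
    for K in kernels:
        G = kron_gf2(G, K)
    return G
```

With only 2 × 2 kernels this reduces to Arikan's classical transform; mixing in a 3 × 3 kernel immediately yields lengths such as 6, 12, or 24.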

3.
The belief propagation (BP) algorithm has the advantages of high-speed decoding and low latency. To improve the block error rate (BLER) performance of BP-based decoding, the BP flipping algorithm was proposed; however, it attempts numerous useless flips. To reduce the number of decoding attempts without any loss of BLER performance, this paper presents a metric that evaluates the likelihood that flipping a given bit will correct the BP decoding. Based on it, a BP-Step-Flipping (BPSF) algorithm is proposed that traces only the unreliable bits in the flip set (FS) and skips the reliable ones. In addition, a threshold β is applied when the magnitude of the log-likelihood ratio (LLR) is small, and an enhanced BPSF (EBPSF) algorithm is presented to lower the BLER. With the same FS, the proposed algorithm reduces the average number of iterations efficiently. Numerical results show that the average number of iterations for EBPSF-1 decreases by 77.5% at N = 256, compared with the BP bit-flip-1 (BPF-1) algorithm at Eb/N0 = 1.5 dB.
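The "skip the reliable bits" step can be sketched as below. This is a hedged illustration: the reliability metric is simplified to the raw LLR magnitude, and the threshold β and variable names are assumptions rather than the paper's exact definitions.

```python
def build_flip_set(llrs, info_positions, beta):
    """Keep only the 'unreliable' information bits, i.e. those whose LLR
    magnitude falls below the threshold beta, ordered least-reliable first.
    Reliable bits are skipped entirely, so they are never flipped."""
    unreliable = [i for i in info_positions if abs(llrs[i]) < beta]
    return sorted(unreliable, key=lambda i: abs(llrs[i]))
```

Stepping through this reduced set, rather than every position in the full flip set, is what cuts the average number of decoding attempts.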

4.
The traditional successive cancellation (SC) decoding algorithm suffers from error propagation in the decoding process, so mitigating error propagation is important for improving SC decoding performance. In this paper, we propose a new algorithm combining reinforcement learning with SC flip (SCF) decoding of polar codes, called the Q-learning-assisted SCF (QLSCF) decoding algorithm. QLSCF uses reinforcement learning to select candidate bits for SC flip decoding: we establish a reinforcement learning model in which the agent selects the candidate bits used to decode the information sequence. In our scheme, the decoding delay caused by metric ordering is removed from the decoding process. Simulation results demonstrate that the decoding delay of the proposed algorithm is reduced compared with the critical-set-based SCF decoding algorithm, without loss of performance.
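The candidate-bit selection can be pictured as standard tabular Q-learning with an epsilon-greedy policy. This is a generic sketch of that technique, not the authors' model: the state encoding, action set, and reward design in the paper are not specified here, and all names are illustrative.

```python
import random

def choose_flip_bit(q, state, actions, eps, rng):
    """Epsilon-greedy choice of the next candidate bit (action) to flip."""
    if rng.random() < eps:
        return rng.choice(actions)                      # explore
    return max(actions, key=lambda a: q.get((state, a), 0.0))  # exploit

def update_q(q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((q.get((next_state, a), 0.0) for a in actions), default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Because the learned Q-values rank the candidates directly, no per-codeword metric sorting is needed at decoding time, which is where the latency saving comes from.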

5.
In successive cancellation list (SCL) decoding, the tree-pruning operation retains the L best paths with respect to a path metric at every decoding step. However, the correct path might fall among the L worst paths due to the imposed penalties; in that case it is pruned and the decoding fails. The shifted pruning (SP) scheme can recover the correct path through additional decoding attempts when decoding fails, in which the pruning window is shifted by κ ≤ L paths over certain bit positions. The special case κ = L is known as SCL-flip decoding, which was independently proposed in 2019. In this work, a new metric that performs better, in particular for medium-length and long codes, is proposed, and nested shifted pruning schemes are suggested to improve the average complexity.
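The shifted pruning window can be illustrated in one function. This is a simplified sketch under the assumption that lower path metrics are better; the selection of which bit positions to shift at, and the nested schedule, are the paper's contribution and are not modeled here.

```python
def prune_paths(path_metrics, L, shift=0):
    """Standard SCL pruning keeps the L best paths (shift = 0). Shifted
    pruning instead keeps the paths ranked shift .. shift+L-1, so a correct
    path that penalties pushed into the 'worst' region can survive a retry."""
    order = sorted(range(len(path_metrics)), key=lambda i: path_metrics[i])
    return order[shift:shift + L]
```

With shift = L the retained window is exactly the complement of the usual survivors, which is the SCL-flip special case mentioned above.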

6.
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding with sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ≥ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes.
We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν = 2.
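The convolutional precoding step that distinguishes PAC codes from CRC-aided polar codes is a rate-1 convolution over GF(2). A minimal sketch follows; the 3-tap polynomial c = [1, 1, 1] is an illustrative choice matching the ν = 2 constraint length mentioned above, not Arıkan's original connection polynomial.

```python
def conv_precode(v, c):
    """Rate-1 convolutional precoding over GF(2): u_i = XOR_j c_j * v_{i-j}.
    The constraint length is nu = len(c) - 1 (nu = 2 for c = [1, 1, 1]).
    The precoded vector u then feeds the usual polar transform."""
    u = []
    for i in range(len(v)):
        bit = 0
        for j, cj in enumerate(c):
            if cj and i - j >= 0:
                bit ^= v[i - j]
        u.append(bit)
    return u
```

Since the convolution is rate-1, it adds no redundancy of its own; it only spreads each input bit across several polarized channels, which is what rate profiling exploits.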

7.
There exists a natural trade-off in public-key encryption (PKE) schemes based on ring learning with errors (RLWE): a wider error distribution increases security, but at the cost of an increased decryption failure rate (DFR). A straightforward solution to this problem is an error-correcting code, which is commonly used in communication systems and already appears in some RLWE-based proposals. However, applying error-correcting codes to these cryptographic schemes is far from simply installing an add-on. First, the residue error term derived by decryption has correlated coefficients, whereas most prevalent error-correcting codes with remarkable error tolerance assume the channel noise to be independent and memoryless; this explains why only simple error-correcting methods are used in existing RLWE-based PKE schemes. Second, the correlated coefficients leave accurate DFR estimation challenging even for uncoded plaintext, and the literature shows that a tighter DFR estimate can effectively create a DFR margin. Third, most error-correcting codes are not well designed with security in mind; for example, syndrome decoding is not constant-time by nature, so a code good at error correction might still be weak under a variety of attacks. In this work, we propose a polar coding scheme for RLWE-based PKE. A relaxed "independence" assumption is used to derive an uncorrelated residue noise term, and a wireless-communication strategy, outage, is used to construct the polar codes. Furthermore, some knowledge about the residue noise is exploited to improve the decoding performance. With the Round 2 parameterization of NewHope, the proposed scheme creates a considerable DFR margin, which gives a competitive security improvement compared to state-of-the-art benchmarks. Specifically, the security is improved by 28.8%, while a DFR of 2^(-149) is achieved for a code rate of 0.25, n = 1024, q = 12289, and binomial parameter k = 55.
Moreover, polar encoding and decoding have quasilinear complexity O(N log₂ N) and intrinsically support constant-time implementations.
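The O(N log₂ N) claim and the constant-time friendliness both come from the butterfly structure of the polar transform: log₂(N) stages of N/2 unconditional XORs, with no data-dependent branching. A generic sketch of that transform (not this paper's full encoder, which also involves the outage-based code construction):

```python
def polar_encode(u):
    """In-place butterfly polar transform x = u * F^(kron n) over GF(2),
    F = [[1,0],[1,1]]: log2(N) stages of N/2 XORs each, hence O(N log N).
    Every XOR executes regardless of the data, so there are no
    secret-dependent branches. N = len(u) must be a power of 2."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x
```

The same butterfly run in reverse underlies the quasilinear decoders, which is why the whole pipeline stays O(N log₂ N).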

8.
An end-to-end joint source-channel (JSC) encoding matrix and a JSC decoding scheme, using the proposed bit-flipping check (BFC) algorithm and the controversial-variable-node-selection-based adaptive belief propagation (CVNS-ABP) decoding algorithm, are presented to improve the efficiency and reliability of the joint source-channel coding (JSCC) scheme based on double Reed–Solomon (RS) codes. The constructed coding matrix realizes source compression and channel coding of multiple sets of information data simultaneously, which significantly improves the coding efficiency. The proposed BFC algorithm uses channel soft information to select and flip the unreliable bits, and then uses the redundancy of the source block for error verification and correction. The proposed CVNS-ABP algorithm reduces the influence of error bits on decoding by selecting error variable nodes (VNs) from among the controversial VNs and accounting for them in the sparsification of the parity-check matrix. In addition, the proposed JSC decoding scheme based on the BFC and CVNS-ABP algorithms connects source and channel to improve JSC decoding performance. Simulation results show that the proposed BFC-based hard-decision decoding (BFC-HDD) algorithm (ζ = 1) and BFC-based low-complexity chase (BFC-LCC) algorithm (ζ = 1, η = 3) achieve about 0.23 dB and 0.46 dB of signal-to-noise ratio (SNR) gain over the prior-art decoding algorithm at a frame error rate (FER) of 10^(-1). Compared with the ABP algorithm, the proposed CVNS-ABP and BFC-CVNS-ABP algorithms achieve performance gains of 0.18 dB and 0.23 dB, respectively, at FER = 10^(-3).

9.
Blaum–Roth codes are binary maximum distance separable (MDS) array codes over the binary quotient ring F₂[x]/(M_p(x)), where M_p(x) = 1 + x + ⋯ + x^(p−1) and p is a prime number. Two existing all-erasure decoding methods for Blaum–Roth codes are the syndrome-based decoding method and the interpolation-based decoding method. In this paper, we propose a modified syndrome-based decoding method and a modified interpolation-based decoding method that have lower decoding complexity than the syndrome-based and interpolation-based methods, respectively. Moreover, we present a fast decoding method for Blaum–Roth codes based on the LU decomposition of the Vandermonde matrix that has a lower decoding complexity than the two modified decoding methods for most parameters.
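All of these decoders work with arithmetic in the quotient ring F₂[x]/(M_p(x)) defined above. As a self-contained illustration of that arithmetic (not any of the paper's decoding methods), ring multiplication can be done by ordinary GF(2) polynomial multiplication followed by reduction via x^d ≡ x^(d-p+1)·(1 + x + ⋯ + x^(p-2)) for d ≥ p−1:

```python
def ring_mul(a, b, p):
    """Multiply two elements of F2[x]/(M_p(x)), M_p(x) = 1 + x + ... + x^(p-1),
    represented as coefficient lists of length p-1 (index = degree).
    Since M_p has x^(p-1) as leading term, x^(p-1) ≡ 1 + x + ... + x^(p-2)."""
    prod = [0] * (2 * p - 3)                  # degrees 0 .. 2p-4
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    prod[i + j] ^= 1          # GF(2) polynomial multiplication
    for d in range(len(prod) - 1, p - 2, -1): # reduce high degrees downward
        if prod[d]:
            prod[d] = 0
            for k in range(d - (p - 1), d):   # x^d -> x^(d-p+1) + ... + x^(d-1)
                prod[k] ^= 1
    return prod[:p - 1]
```

For p = 3 the ring is a field (GF(4)); for general prime p it is only a ring, which is exactly what makes fast Blaum–Roth decoding over it nontrivial.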


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)