Similar Literature
20 similar articles found.
1.
In our previous paper [1], a novel CPCD technique was introduced that significantly improves the decoding of low-density parity-check (LDPC) codes over the well-known sum-product algorithm (SPA). However, the results presented in [1] were limited to shorter LDPC codes and to transmission over an additive white Gaussian noise (AWGN) channel using QPSK modulation. In this study, we demonstrate that CPCD achieves significant gains regardless of the code length, the modulation technique, or the channel type, including fading channels. In addition, a novel turbo-CPCD technique that follows the principle of turbo LDPC is introduced. It is shown here that CPCD and turbo-CPCD can perform about 0.2–1.5 dB better than SPA decoding and turbo LDPC codes.
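The paper benchmarks CPCD against standard sum-product decoding of LDPC codes. As a rough illustration of the baseline being improved upon (not of CPCD itself, whose details are in [1]), here is a minimal hard-decision bit-flipping LDPC decoder; the small parity-check matrix is illustrative only:

```python
import numpy as np

# Illustrative small parity-check matrix (a (7,4) Hamming H used as a toy
# example; real LDPC matrices are large and sparse).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(y, H, max_iter=20):
    """Hard-decision bit-flipping decoding: repeatedly flip the bit that
    participates in the most unsatisfied parity checks."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():           # all parity checks satisfied
            return y
        votes = H.T @ syndrome           # unsatisfied checks touching each bit
        y[np.argmax(votes)] ^= 1
    return y

received = np.zeros(7, dtype=int)
received[2] = 1                          # one channel bit error
print(bit_flip_decode(received, H))      # recovers the all-zero codeword
```

With soft channel information, the same check/variable-node loop structure generalizes to the sum-product algorithm the paper compares against.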

2.
Accurate estimation of the channel log-likelihood ratio (LLR) is crucial to the decoding of modern channel codes like turbo, low-density parity-check (LDPC), and polar codes. Under an additive white Gaussian noise (AWGN) channel, the calculation of the LLR is relatively straightforward, since the closed-form expression for the channel likelihood function is perfectly known to the receiver. However, it is much more complicated for heterogeneous networks, where the global noise (i.e., noise plus interference) may be dominated by non-Gaussian interference with an unknown distribution. Although the LLR can still be calculated by approximating the distribution of the global noise as Gaussian, this causes a performance loss due to the non-Gaussian nature of the global noise. To address this problem, we propose to use a bi-Gaussian (BG) distribution to approximate the unknown distribution of the global noise; the two parameters of the BG distribution can easily be estimated from the second and fourth moments of the overall received signal, without any knowledge of the interfering channel state information (CSI) or signaling format. Simulation results indicate that the proposed BG approximation effectively improves the word error rate (WER) performance. The gain of the BG approximation over the Gaussian approximation depends heavily on the interference structure: for the scenario of a single BPSK interferer with a 5 dB interference-to-noise ratio (INR), we observed a gain of about 0.6 dB. The improved LLR estimation also accelerates the convergence of iterative decoding and thus lowers the overall decoding complexity, in general by 25–50%.
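The moment-based parameter estimation described above can be sketched as follows, assuming an equal-weight, zero-mean two-component Gaussian mixture; the exact parameterization used in the paper may differ:

```python
import numpy as np

def fit_bg(samples):
    """Fit 0.5*N(0,v1) + 0.5*N(0,v2) from the 2nd and 4th sample moments.
    For this mixture: m2 = (v1+v2)/2 and m4 = 3(v1^2+v2^2)/2, so v1, v2 are
    the roots of t^2 - 2*m2*t + (2*m2^2 - m4/3) = 0."""
    m2 = np.mean(samples**2)
    m4 = np.mean(samples**4)
    s = 2.0 * m2                      # v1 + v2
    p = 2.0 * m2**2 - m4 / 3.0        # v1 * v2
    disc = max(s * s - 4.0 * p, 0.0)  # non-negative when kurtosis >= 3
    return (s + np.sqrt(disc)) / 2.0, (s - np.sqrt(disc)) / 2.0

def bg_llr(y, v1, v2):
    """Channel LLR for BPSK (+1/-1) under the fitted BG noise model."""
    def pdf(x):
        return 0.5 * (np.exp(-x**2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
                      + np.exp(-x**2 / (2 * v2)) / np.sqrt(2 * np.pi * v2))
    return np.log(pdf(y - 1.0) / pdf(y + 1.0))

rng = np.random.default_rng(0)
noise = np.where(rng.random(200_000) < 0.5,
                 rng.normal(0, 1.0, 200_000),   # thermal-noise-like component
                 rng.normal(0, 2.0, 200_000))   # interference-like component
v1, v2 = fit_bg(noise)
print(sorted([v1, v2]))   # close to the true variances [1.0, 4.0]
```

`bg_llr(y, v1, v2)` would then feed the iterative decoder in place of the Gaussian LLR `2y/sigma^2`.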

3.
In this paper, the performance of turbo codes for the 10–66 GHz WiMAX system is analyzed and simulated. The channel of the WiMAX system is modeled as a Rician channel owing to the short wavelength. A uniform interleaver is used in the performance analysis to derive the average upper bound on the performance of turbo codes. Bit error rate (BER) simulations are performed for WiMAX systems with and without turbo codes. It is shown that a coding gain of about 4.3 dB can be achieved by using a [1, 11/13, 15/13] turbo code with 5 iterations, so the required transmission power of the WiMAX system can be decreased. It is also demonstrated that the performance of turbo codes improves as the interleaver length and the number of iterations increase.

4.
Quantum turbo product codes
肖海林, 欧阳缮, 谢武 《物理学报》(Acta Physica Sinica) 2011, 60(2): 020301
Quantum communication is an emerging interdisciplinary field combining classical communication and quantum mechanics, and quantum error-correction coding is one of the key technologies for realizing it. The main approach to constructing quantum error-correcting codes is to borrow from classical error-correction techniques; many classical coding techniques have counterparts in the quantum domain. Targeting turbo product codes, among the best classical error-correcting codes, this paper proposes a quantum turbo product code whose stabilizer codes are newly constructed CSS-type quantum convolutional codes. First, using group theory and the basic principles of stabilizer codes, generators of the new CSS-type quantum convolutional stabilizer codes are constructed and their encoding networks are described. Next, the interleaving matrix of the quantum turbo product code is derived from the definition of the quantum SWAP gate. Finally, the correspondence between the decoding trace distance of the quantum turbo product code and the decoding distance of the classical turbo product code is derived, and an encoding/decoding scheme for quantum turbo product codes is proposed. The method is highly structured, simple in design, and easy to implement as a network. Keywords: CSS codes; quantum convolutional codes; quantum turbo product codes; quantum error-correction coding

5.
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding by sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ≥ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν = 2.
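The convolutional precoding step of PAC codes mentioned above can be sketched as follows; the generator taps and input bits are illustrative choices, and the rate-profiling and polar-transform stages are omitted:

```python
import numpy as np

def conv_precode(bits, g):
    """Binary convolutional precoding u[i] = XOR of bits[i-j] over taps
    g[j] = 1 (mod-2 convolution with the generator polynomial). A generator
    with 3 taps, e.g. g = [1, 1, 1], has memory 2, matching the short
    constraint lengths the paper reports as sufficient."""
    out = np.zeros(len(bits), dtype=int)
    for i in range(len(bits)):
        for j, gj in enumerate(g):
            if gj and i - j >= 0:
                out[i] ^= bits[i - j]
    return out

print(conv_precode(np.array([1, 0, 1, 1]), g=[1, 1, 1]))   # [1 1 0 0]
```

In PAC encoding, the precoded vector `u` would then be placed on the rate-profiled positions and passed through the polar transform.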

6.
Quantum error-correcting codes (QECCs) play an important role in preventing quantum information from decohering. Good quantum stabilizer codes have been constructed from classical error-correcting codes. In this paper, Bose–Chaudhuri–Hocquenghem (BCH) codes over finite fields are used to construct quantum codes. First, we search for classical BCH codes that contain their dual codes by studying suitable cyclotomic cosets. Then, we construct nonbinary quantum BCH codes with given parameter sets. Finally, a new family of quantum BCH codes is obtained via Steane's enlargement of the nonbinary Calderbank-Shor-Steane (CSS) construction and the Hermitian construction. We show that cyclotomic cosets are good tools for studying quantum BCH codes: the defining sets contain the largest numbers of consecutive integers. Compared with previous results, the new quantum BCH codes have better code parameters without restrictions and better lower bounds on the minimum distance. Moreover, the new quantum codes can be constructed over any finite field, which enlarges the range of quantum BCH codes.
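Cyclotomic cosets, the paper's main tool for choosing BCH defining sets, are straightforward to compute; a minimal sketch (the length-15 binary case is shown purely as an example):

```python
def cyclotomic_cosets(q, n):
    """Partition {0, ..., n-1} into cyclotomic cosets of q modulo n:
    the coset of i is {i, i*q, i*q^2, ...} mod n."""
    cosets, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        coset, x = [], i
        while x not in seen:
            seen.add(x)
            coset.append(x)
            x = (x * q) % n
        cosets.append(sorted(coset))
    return cosets

# Example: cosets of 2 mod 15 (binary BCH codes of length 15)
for c in cyclotomic_cosets(2, 15):
    print(c)
```

A BCH defining set is a union of such cosets; the paper's constructions look for unions covering the longest possible run of consecutive integers while keeping the dual-containing property.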

7.
We present a universal framework for quantum error-correcting codes, i.e., a framework that applies to the most general quantum error-correcting codes. This framework is based on the group algebra, an algebraic notation associated with nice error bases of quantum systems. A key feature of this framework is that the properties of quantum codes can be characterized by the properties of the group algebra. We show how the framework characterizes the properties of quantum codes and use it to derive some new results about quantum codes.

8.
This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents of these codes, transmitted over the memoryless binary symmetric channel (BSC), for both the well-known maximum-likelihood (ML) decoding and proposed low-complexity decoding algorithms. We prove that there exist such LDPC codes for which the probability of erroneous decoding decreases exponentially with the code length, for coding rates below the corresponding channel capacity. We also show that the obtained error-exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.

9.
周茜  李亮  陈增强  赵加祥 《中国物理 B》2008,17(10):3609-3615
Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes to fully realize the digital fountain concept: they are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, the degree of each LT encoding symbol is chosen randomly according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize this randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences to the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and the neighbour(s) of each encoding symbol. It is shown that the chaos-based LT codes perform better than LT codes implemented with a traditional pseudo-random number generator.
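The chaotic degree selection can be sketched as follows, assuming a Kent map driving inverse-CDF sampling of the ideal soliton distribution; the paper's actual map parameters and degree distribution (e.g. robust soliton) may differ:

```python
def kent_map(x, m=0.7):
    """Skew tent (Kent) map on (0, 1); its invariant density is uniform,
    so its orbit can stand in for a uniform pseudo-random source."""
    return x / m if x < m else (1.0 - x) / (1.0 - m)

def ideal_soliton(k):
    # P(1) = 1/k, P(d) = 1/(d(d-1)) for d = 2..k; probabilities sum to 1
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def chaotic_degrees(k, count, x0=0.123):
    """Pick `count` encoding-symbol degrees via inverse-CDF sampling,
    with the Kent map orbit replacing a conventional PRNG."""
    cdf, acc = [], 0.0
    for p in ideal_soliton(k):
        acc += p
        cdf.append(acc)
    degrees, x = [], x0
    for _ in range(count):
        x = kent_map(x)
        if not 0.0 < x < 1.0:        # guard against degenerate float orbits
            x = x0
        # smallest degree whose cumulative probability covers x
        degrees.append(next((i + 1 for i, c in enumerate(cdf) if x <= c), k))
    return degrees

degs = chaotic_degrees(k=100, count=1000)
print(min(degs), max(degs))   # all degrees lie in 1..100
```

A second map, seeded differently, would similarly drive the uniform choice of each symbol's neighbours.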

10.
Ji-Hao Fan 《中国物理 B》2021,30(12):120302-120302
In most practical quantum mechanical systems, quantum noise due to decoherence is highly biased towards dephasing: the quantum state suffers much more seriously from phase-flip noise than from bit-flip noise. In this work, we construct new families of asymmetric quantum concatenated codes (AQCCs) to deal with such biased quantum noise. Our construction is based on a novel concatenation scheme for building AQCCs with large asymmetries, in which classical tensor-product codes and concatenated codes are utilized to correct phase-flip noise and bit-flip noise, respectively. We generalize the original concatenation scheme to a more general case to better correct degenerate errors. Moreover, we focus on constructing nonbinary AQCCs that are highly degenerate. Compared to the previous literature, the AQCCs constructed in this paper show much better parameters than existing codes. Furthermore, we design the specific encoding circuit of the AQCCs, and show that our codes can be encoded more efficiently than standard quantum codes.

11.
Vishav Jyoti 《Optik》2011,122(10):843-850
In this paper, the design, implementation, and performance analysis of various one-dimensional codes in an OCDMA system for different data formats are presented. A number of different codes are used with optical CDMA to improve its error performance. Here, three such codes, optical orthogonal codes (OOC), Walsh-Hadamard codes, and zero cross-correlation (ZCC) codes, are compared using different data formats: NRZ raised cosine, NRZ rectangular, RZ raised cosine, and RZ rectangular. It is found that NRZ raised cosine gives the best system performance for all the codes considered. The three codes are then compared in terms of BER, eye diagrams, and received optical power using the NRZ raised cosine modulation format. The simulation results reveal that ZCC codes, which by construction have the zero cross-correlation property, provide a better BER than the OOC and Walsh-Hadamard codes and are the most suitable for OCDMA systems.

12.
Utilizing fountain codes to control the peak-to-average power ratio (PAPR) is a classic scheme in Orthogonal Frequency Division Multiplexing (OFDM) wireless communication systems. However, because the robust soliton distribution (RSD) produces large degree values, the decoding performance is severely reduced. In this paper, we design a statistical degree (SD) distribution for the scenario in which fountain codes are used to control the PAPR. The probability of the produced PAPR is combined with the RSD to design the PRSD distribution, which boosts the production of smaller degree values. Subsequently, a particle swarm optimization (PSO) algorithm is used to search for the optimal degree value between the binary exponential distribution (BED) and the PRSD distribution according to the minimum-average-degree principle. Simulation results demonstrate that the proposed method outperforms other relevant degree distributions at the same controlled PAPR threshold, and the average degree value and decoding efficiency are remarkably improved.
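For reference, the PAPR quantity being controlled can be measured as below; the QPSK constellation, 64 subcarriers, and 4x oversampling are illustrative choices, not taken from the paper:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR (in dB) of one OFDM symbol: zero-pad the spectrum for
    oversampling, take the IFFT, and compare peak to mean power."""
    n = len(freq_symbols)
    spectrum = np.zeros(n * oversample, dtype=complex)
    spectrum[:n // 2] = freq_symbols[:n // 2]    # positive frequencies
    spectrum[-n // 2:] = freq_symbols[n // 2:]   # negative frequencies
    x = np.fft.ifft(spectrum)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(2)
qpsk = ((1 - 2 * rng.integers(0, 2, 64))
        + 1j * (1 - 2 * rng.integers(0, 2, 64))) / np.sqrt(2)
print(round(papr_db(qpsk), 2))   # typically several dB for random data
```

Fountain-code-based schemes exploit the ratelessness: among many candidate encoded symbols, those whose OFDM symbols exceed a PAPR threshold can simply be discarded.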

13.
An efficient and practical post-processing technique based on reverse reconciliation for continuous-variable quantum key distribution is proposed and simulated with low-density parity-check (LDPC) codes. Multilevel coding/multistage decoding, which fully utilizes optimization techniques such as vector quantization and iterative decoding together with optimal channel codes closest to the Shannon limit, is used to realize an efficient reverse reconciliation algorithm. Simulation results showed that the proposed m...

14.
Simulation analysis of the error-correction performance of one-way quantum key error-correction protocols
赵峰 《物理学报》(Acta Physica Sinica) 2013, 62(20): 200303
Efficient error correction is one of the key post-processing techniques in quantum key distribution. Based on a one-way, single-pass error-correction scheme that cascades Hamming-code syndromes, the correction capabilities of three syndrome cascades are analyzed theoretically and by simulation. Based on these results, a hybrid syndrome-cascade error-correction protocol is proposed, and the key generation efficiency is improved by optimizing the parameters of the correction procedure. The protocol's correction capability and key generation efficiency are then analyzed by simulation, and finally the residual bit error rate of the key and its confidence interval are estimated from the posterior distribution of the error rate. Simulations of single-syndrome cascades show that, for equal correction capability, the (7,4) Hamming code yields the highest key generation efficiency for initial error rates 3% < p ≤ 11%; the (15,11) Hamming code is best for 1.5% < p ≤ 3.0%; and the (31,26) Hamming code is best for p ≤ 1.5%. For the hybrid scheme, simulations show that at an initial error rate of 9.50%, after 8 rounds of hybrid syndrome cascading, the key generation efficiency is 9.94%, with an expected error rate of 5.21×10^-12 whose 90%-confidence upper bound is 2.85×10^-11, about 3 times the key generation efficiency of the single (7,4) syndrome cascade. Keywords: quantum key distribution; secure error correction; efficiency analysis
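One round of the (7,4) Hamming syndrome step that the protocol cascades can be sketched as follows; the cascading rounds, parameter optimization, and privacy considerations are omitted:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, with column j equal to
# the binary representation of j+1 (row 0 is the least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def reconcile(alice, bob):
    """One-way syndrome reconciliation on a 7-bit block: Alice sends
    H @ alice (mod 2); Bob compares with his own syndrome. The difference
    equals the syndrome of the error pattern (alice need not be a codeword),
    and for a single disagreement it directly indexes the bad bit."""
    diff = (H @ alice + H @ bob) % 2
    if diff.any():
        pos = int(diff[0] + 2 * diff[1] + 4 * diff[2]) - 1
        bob = bob.copy()
        bob[pos] ^= 1
    return bob

alice = np.array([1, 0, 1, 1, 0, 0, 1])
bob = alice.copy()
bob[4] ^= 1                                     # one key-bit disagreement
print((reconcile(alice, bob) == alice).all())   # True
```

Blocks with two or more disagreements are miscorrected, which is why the protocol cascades several syndrome rounds and tracks the posterior error rate.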

15.
In this paper, a theoretical lower bound on the success probability of blind reconstruction of Bose–Chaudhuri–Hocquenghem (BCH) codes is derived. In particular, the blind reconstruction method based on the consecutive roots of generator polynomials is analyzed, because this method shows the best blind reconstruction performance. To derive the performance lower bound, a theoretical analysis of BCH codes from the perspective of blind reconstruction is performed. The results apply not only to binary BCH codes but also to non-binary BCH codes, including Reed–Solomon (RS) codes. By comparing the derived lower bound with simulation results, it is confirmed that the success probability of blind reconstruction of BCH codes based on the consecutive roots of generator polynomials is well bounded by the proposed lower bound.

16.
In this paper, we investigate the joint design of channel and network coding in bi-directional relaying systems and propose a combined low-complexity physical-layer network coding and LDPC decoding scheme. When the same LDPC code is employed at both source nodes, we show that the relay can decode the network-coded codeword from the superimposed signal received over the BPSK-modulated multiple-access channel. Simulation results show that this novel joint physical-layer network coding and LDPC decoding method outperforms the existing MMSE network coding and LDPC decoding method over the AWGN and complex MAC channels.
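The relay's decoding of the network-coded (XOR) bit from the superimposed BPSK signal rests on a simple likelihood computation, sketched below; the noise variance value is illustrative:

```python
import numpy as np

def xor_llr(y, sigma2):
    """Channel LLR of the XOR bit at the relay for y = x1 + x2 + n with
    BPSK x1, x2 in {+1, -1}: XOR = 0 when x1 = x2 (y centered at +/-2),
    XOR = 1 when they differ (y centered at 0)."""
    p_same = 0.5 * (np.exp(-(y - 2) ** 2 / (2 * sigma2))
                    + np.exp(-(y + 2) ** 2 / (2 * sigma2)))
    p_diff = np.exp(-y ** 2 / (2 * sigma2))
    return np.log(p_same / p_diff)

print(xor_llr(2.0, 0.5) > 0, xor_llr(0.0, 0.5) < 0)   # True True
```

Because the XOR of two codewords of the same linear LDPC code is again a codeword, these LLRs can be fed directly into a standard LDPC decoder at the relay.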

17.
For high-dimensional data such as images, learning an encoder that can output a compact yet informative representation is a key task on its own, in addition to facilitating subsequent processing of data. We present a model that produces discrete infomax codes (DIMCO); we train a probabilistic encoder that yields k-way d-dimensional codes associated with input data. Our model maximizes the mutual information between codes and ground-truth class labels, with a regularization which encourages entries of a codeword to be statistically independent. In this context, we show that the infomax principle also justifies existing loss functions, such as cross-entropy, as its special cases. Our analysis also shows that using shorter codes reduces overfitting in the context of few-shot classification, and our various experiments show this implicit task-level regularization effect of DIMCO. Furthermore, we show that the codes learned by DIMCO are efficient in terms of both memory and retrieval time compared to prior methods.

18.
Kim BH  Kim GD  Song TK 《Ultrasonics》2007,46(2):148-154
The compression error of post-compression-based coded excitation techniques increases with decreasing f-number, which elevates the side-lobe levels. In this paper, a post-compression-based coded excitation technique that reduces compression errors through dynamic aperture control is proposed. To improve the near-field resolution with no frame-rate reduction, the proposed method performs simultaneous transmit multi-zone focusing using two mutually orthogonal complementary Golay codes. The two mutually orthogonal sequences of length 16 are transmitted simultaneously toward two different focal depths and are separately compressed into two short pulses on receive after dynamic focusing. After the same transmit-receive operation is carried out for the same scan line with the complementary set of the orthogonal Golay codes, a single scan line with two transmit foci is obtained. Computer simulation results using a linear array with a center frequency of 7.5 MHz and a 60% −6 dB bandwidth show that the range side-lobe level can be suppressed below −50 dB when the f-number is kept no smaller than 3. The performance of the proposed scheme for a smaller f-number of 2 was also verified through experiments using a 3.85 MHz curved linear array with a 60% −6 dB bandwidth. Both the simulation and experimental results show that the proposed method provides improved lateral resolution compared to conventional pre-compression- and post-compression-based coded excitation imaging using Golay codes.
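The zero range-side-lobe property of complementary Golay pairs, on which such coded excitation relies, is easy to verify numerically; a sketch using the standard recursive pair construction (the paper's actual length-16 sequences may differ):

```python
import numpy as np

def golay_pair(n):
    """Build a complementary Golay pair of length n (a power of 2) by the
    standard doubling construction: (a, b) -> (a|b, a|-b)."""
    a, b = np.array([1]), np.array([1])
    while len(a) < n:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    # aperiodic autocorrelation
    return np.correlate(x, x, mode="full")

a, b = golay_pair(16)
s = acorr(a) + acorr(b)
# The defining property: the two autocorrelations sum to a single spike of
# height 2n (here 32) with exactly zero side lobes.
print(s[len(a) - 1], np.abs(np.delete(s, len(a) - 1)).max())   # 32 0
```

In imaging, each sequence of the pair is transmitted on a separate firing and the two compressed echoes are summed, which is what cancels the range side lobes.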

19.
Research on quantum convolutional codes aims to protect sequences of quantum information in long-distance communication. A polynomial representation of quantum states is defined and, following the construction of Calderbank-Shor-Steane (CSS) quantum codes, a new encoding and decoding method for CSS-type quantum convolutional codes is given, together with a description of the encoding/decoding network. The method expresses the codeword basis states as the product of the information polynomial and the generator polynomial, and then realizes the encoding/decoding network through polynomial multiplication operations on quantum states. Finally, drawing on the decoding ideas of classical convolutional codes, a quantum Viterbi algorithm with linear complexity is given. Keywords: quantum information; quantum convolutional codes; encoding and decoding; error-correction algorithms

20.
Power law tails in the Italian personal income distribution
F. Clementi  M. Gallegati   《Physica A》2005,350(2-4):427-438
We investigate the shape of the Italian personal income distribution using microdata from the Survey on Household Income and Wealth, made publicly available by the Bank of Italy for the years 1977–2002. We find that the upper tail of the distribution is consistent with a Pareto-power law type distribution, while the rest follows a two-parameter lognormal distribution. The results of our analysis show a shift of the distribution and a change of the indexes specifying it over time. As regards the first issue, we test the hypothesis that the evolution of both gross domestic product and personal income is governed by similar mechanisms, pointing to the existence of correlation between these quantities. The fluctuations of the shape of income distribution are instead quantified by establishing some links with the business cycle phases experienced by the Italian economy over the years covered by our dataset.
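A common way to quantify a Pareto power-law tail like the one reported here is the Hill estimator; a sketch on synthetic data (the estimator choice and the tail fraction are our illustrative assumptions, not necessarily the paper's method):

```python
import numpy as np

def hill_alpha(samples, tail_fraction=0.05):
    """Hill estimator of the Pareto exponent alpha from the largest
    tail_fraction of the sample: 1 / mean(log(X_(i) / X_(k)))."""
    x = np.sort(samples)[::-1]
    k = max(int(len(x) * tail_fraction), 2)
    tail = x[:k]
    return 1.0 / np.mean(np.log(tail[:-1] / tail[-1]))

rng = np.random.default_rng(3)
alpha_true = 2.5
# Inverse-CDF sampling of Pareto(alpha) with minimum 1: X = U^(-1/alpha)
pareto = (1 - rng.random(100_000)) ** (-1.0 / alpha_true)
print(round(hill_alpha(pareto), 2))   # near 2.5
```

On real income data, the choice of the tail cutoff (here a fixed fraction) is the delicate part, since only the upper tail is Pareto while the bulk is lognormal.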
