Similar Literature
20 similar documents found.
1.
Quantum error-correcting codes (QECCs) play an important role in preventing the decoherence of quantum information. Good quantum stabilizer codes can be constructed from classical error-correcting codes. In this paper, Bose–Chaudhuri–Hocquenghem (BCH) codes over finite fields are used to construct quantum codes. First, we identify classical BCH codes that contain their dual codes by studying suitable cyclotomic cosets. Then, we construct nonbinary quantum BCH codes with given parameter sets. Finally, a new family of quantum BCH codes is obtained via Steane's enlargement of the nonbinary Calderbank–Shor–Steane (CSS) construction and the Hermitian construction. We prove that cyclotomic cosets are good tools for studying quantum BCH codes: the defining sets contain the largest possible runs of consecutive integers. Compared with the results in the references, the new quantum BCH codes have better code parameters without restrictions and better lower bounds on the minimum distance. Moreover, the new quantum codes can be constructed over any finite field, which enlarges the range of quantum BCH codes.
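As an illustration of the cyclotomic-coset machinery used here, the following is a minimal sketch that partitions Z_n into q-ary cyclotomic cosets; the dual-containing check in the comment is the standard criterion for cyclic codes, not necessarily the paper's exact condition.

```python
from math import gcd

def cyclotomic_cosets(q: int, n: int):
    """Partition {0, ..., n-1} into q-ary cyclotomic cosets C_s = {s, sq, sq^2, ...} mod n."""
    assert gcd(q, n) == 1, "q must be coprime to n"
    remaining = set(range(n))
    cosets = []
    while remaining:
        s = min(remaining)          # coset representative
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        cosets.append(sorted(coset))
        remaining -= set(coset)
    return cosets

# A BCH defining set Z is a union of such cosets. A standard criterion:
# the cyclic code contains its Euclidean dual when Z is disjoint from
# -Z = {-z mod n : z in Z}.
print(cyclotomic_cosets(2, 15))
# [[0], [1, 2, 4, 8], [3, 6, 9, 12], [5, 10], [7, 11, 13, 14]]
```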

2.
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive-cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily on the following ideas: replacing CRC precoding with convolutional precoding (under appropriate rate profiling) and replacing list decoding with sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ≥ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν = 2.
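To make the PAC pipeline concrete, here is a minimal sketch of the encoder (rate profiling, then convolutional precoding, then the polar transform). The blocklength, profile set A, and generator taps c are illustrative placeholders, not the paper's design choices.

```python
import numpy as np

def polar_transform(u):
    """Compute x = u @ F^{(kron) n} over GF(2), where F = [[1, 0], [1, 1]]."""
    x = u.copy()
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]  # XOR butterfly stage
        step *= 2
    return x

def pac_encode(data_bits, N, A, c):
    """Rate profile (data into positions A), convolve with taps c, then polarize."""
    v = np.zeros(N, dtype=np.uint8)
    v[A] = data_bits                       # rate profiling; other positions frozen to 0
    u = np.zeros(N, dtype=np.uint8)
    for i in range(N):                     # u_i = sum_j c_j * v_{i-j}  (mod 2)
        for j, tap in enumerate(c):
            if tap and i - j >= 0:
                u[i] ^= v[i - j]
    return polar_transform(u)

# c = [1, 1, 1] is a memory-2 (nu = 2) generator, echoing the remark above.
x = pac_encode(np.array([1, 0, 1, 1], dtype=np.uint8), N=8, A=[3, 5, 6, 7], c=[1, 1, 1])
```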

3.
The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. This divergence approximates the KL divergence without requiring the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information-geometric generalization of the skew divergence, called the α-geodesical skew divergence, is proposed, and its properties are studied.
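A minimal sketch of the ordinary skew divergence on a finite alphabet, under one common convention s_λ(p, q) = KL(p ‖ (1−λ)p + λq); the α-geodesical version proposed in the paper replaces this linear mixture with an α-geodesic interpolation. The example demonstrates the absolute-continuity point: KL(p ‖ q) is infinite here, while the skew divergence stays finite.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) in nats, with 0 log 0 = 0; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, lam):
    """s_lam(p, q) = KL(p || (1 - lam) p + lam q): the mixture smooths the target."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return kl(p, (1.0 - lam) * p + lam * q)

p = [0.5, 0.5, 0.0]
q = [0.9, 0.0, 0.1]                      # q vanishes where p > 0, so KL(p || q) = inf
print(skew_divergence(p, q, 0.9))        # finite, unlike KL(p || q)
```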

4.
Polarization-adjusted convolutional (PAC) codes are a class of codes that combine channel polarization with convolutional coding. PAC codes are of interest for their high performance. This paper presents a systematic encoding and shortening method for PAC codes. Systematic encoding is important for lowering the bit-error rate (BER) of PAC codes, and shortening is important for adjusting their block length. It is shown that systematic encoding and shortening of PAC codes can be carried out in a unified framework.

5.
Skew orthogonal polynomials arise in the calculation of the n-point distribution function for the eigenvalues of ensembles of random matrices with orthogonal or symplectic symmetry. In particular, the distribution functions are completely determined by a certain sum involving the skew orthogonal polynomials. In the case that the eigenvalue probability density function involves a classical weight function, explicit formulas for the skew orthogonal polynomials are given in terms of related orthogonal polynomials, and the structure is used to give a closed-form expression for the sum. This theory treats all classical cases on an equal footing, giving formulas applicable at once to the Hermite, Laguerre, and Jacobi cases.
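For orientation, the skew-symmetric inner products in question take the following form in one common convention (the paper's normalization may differ); a family {p_k} is skew orthogonal when all pairings vanish except ⟨p_{2j}, p_{2k+1}⟩ = r_j δ_{jk}.

```latex
% Skew-symmetric inner products underlying skew orthogonal polynomials,
% in one common convention (normalizations vary between references):
\langle f, g \rangle_1
  = \frac{1}{2}\iint f(x)\,g(y)\,\operatorname{sgn}(y - x)\,w(x)\,w(y)\,dx\,dy
  \quad (\text{orthogonal symmetry},\ \beta = 1),
\qquad
\langle f, g \rangle_4
  = \frac{1}{2}\int \bigl[ f(x)\,g'(x) - f'(x)\,g(x) \bigr]\,w(x)\,dx
  \quad (\text{symplectic symmetry},\ \beta = 4).
```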

6.
7.
The mutual orientation of pigments in living organisms, for example the antenna pigments in photosynthesizing organisms, has a strong influence on molecular processes such as excitation energy transfer, which are closely related to the physiological function of the photosynthetic apparatus of plants, algae, and bacteria [1].

8.
The dynamics of the skew information (SI) is investigated for a single Cooper-pair box (CPB) interacting with a single cavity field. By suitably choosing the system parameters and precisely controlling the dynamics, a novel connection is found between the SI and entanglement generation. It is shown that the SI can be increased and reach its maximum value either by increasing the number of photons inside the cavity or by considering the far off-resonant case. The number of oscillations of the SI increases as the ratio between the Josephson-junction capacitance and the gate capacitance decreases, which leads to a significant improvement in the travelling time between the maximum and minimum values.
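For reference, the skew information studied in this setting is typically the Wigner–Yanase quantity I(ρ, K) = −(1/2) Tr([√ρ, K]²). A minimal numerical sketch follows; the 2×2 state and observable are illustrative only.

```python
import numpy as np
from scipy.linalg import sqrtm

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -(1/2) Tr([sqrt(rho), K]^2)."""
    s = sqrtm(rho)                       # rho^{1/2}
    comm = s @ K - K @ s                 # commutator [sqrt(rho), K]
    return -0.5 * np.trace(comm @ comm).real

rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])             # a valid density matrix (trace 1, PSD)
sigma_z = np.diag([1.0, -1.0])
print(skew_information(rho, sigma_z))    # 0 iff rho commutes with K
```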

9.
In this paper, a theoretical lower bound on the success probability of blind reconstruction of Bose–Chaudhuri–Hocquenghem (BCH) codes is derived. In particular, the blind reconstruction method based on the consecutive roots of generator polynomials is analyzed, because this method shows the best blind reconstruction performance. In order to derive a performance lower bound, a theoretical analysis of BCH codes from the perspective of blind reconstruction is performed. Furthermore, the analysis results apply not only to binary BCH codes but also to non-binary BCH codes, including Reed–Solomon (RS) codes. By comparing the derived lower bound with simulation results, it is confirmed that the success probability of blind reconstruction of BCH codes based on the consecutive roots of generator polynomials is well bounded by the proposed lower bound.

10.
We present a universal framework for quantum error-correcting codes, i.e., a framework that applies to the most general quantum error-correcting codes. This framework is based on the group algebra, an algebraic notion associated with nice error bases of quantum systems. Its main advantage is that the properties of quantum codes can be characterized by the properties of the group algebra. We show how the framework characterizes the properties of quantum codes and use it to derive some new results about quantum codes.

11.
This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents for these codes transmitted over the memoryless binary symmetric channel (BSC), both for the well-known maximum-likelihood (ML) decoding and for the proposed low-complexity decoding algorithms. We prove the existence of LDPC codes for which the probability of erroneous decoding decreases exponentially with the code length at any coding rate below the channel capacity. We also show that the obtained error-exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.
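For context (this is classical background, not the paper's LDPC-specific bound), Gallager's random-coding exponent for the BSC with crossover probability p gives the flavor of such exponential bounds:

```latex
% Random-coding bound for the BSC (Gallager): background for the error
% exponents discussed above, not the bound derived in the paper.
P_e \le 2^{-N E_r(R)},
\qquad
E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr],
\qquad
E_0(\rho) = \rho - (1 + \rho) \log_2 \Bigl( p^{\frac{1}{1+\rho}} + (1 - p)^{\frac{1}{1+\rho}} \Bigr).
```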

12.
In our previous paper [1], a novel CPCD technique was introduced that significantly improves the decoding of low-density parity-check (LDPC) codes over the well-known sum-product algorithm (SPA). However, the results presented in [1] were limited to short LDPC codes and transmission over an additive white Gaussian noise (AWGN) channel using QPSK modulation. In this study, we demonstrate that CPCD achieves significant gains regardless of the length of the code, the modulation technique used for transmission, or the type of channel, including fading channels. In addition, a novel turbo-CPCD technique that follows the principles of turbo LDPC codes is introduced. It is shown that CPCD and turbo-CPCD can perform about 0.2–1.5 dB better than SPA decoding and turbo LDPC codes.

13.
A general framework describing the statistical discrimination of an ensemble of quantum channels goes by the name of quantum reading. Several tools can be applied in quantum reading to reduce the error probability in distinguishing the ensemble of channels; classical and quantum codes can be envisioned for this goal. The aim of this paper is to present a simple but fruitful protocol for this task using classical error-correcting codes. Three families of codes are considered: Reed–Solomon codes, BCH codes, and Reed–Muller codes. In conjunction with the use of codes, we also analyze the role of the receiver; in particular, heterodyne and Dolinar receivers are taken into consideration. The encoding and measurement schemes are connected by the probing step, where coherent states are used as probes. Even in this simple setting, interesting results are obtained: as we show, there is a threshold below which using codes surpasses optimal and sophisticated codeless schemes, for any fixed rate and code. BCH codes in conjunction with the Dolinar receiver turn out to be the optimal strategy for error mitigation in quantum reading.
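To make the receiver comparison concrete, here is a small sketch computing the optimal (Helstrom) single-shot error for distinguishing the coherent probes |α⟩ and |−α⟩, alongside a Gaussian baseline; a homodyne receiver stands in here for the paper's heterodyne/Dolinar receivers purely for simplicity.

```python
import numpy as np
from scipy.special import erfc

def helstrom_error(alpha):
    """Optimal error for |alpha> vs |-alpha>: |<alpha|-alpha>|^2 = exp(-4|alpha|^2)."""
    overlap_sq = np.exp(-4.0 * abs(alpha) ** 2)
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

def homodyne_error(alpha):
    """Gaussian-receiver baseline: p_e = (1/2) erfc(sqrt(2) |alpha|)."""
    return 0.5 * erfc(np.sqrt(2.0) * abs(alpha))

for a in (0.3, 0.6, 1.0):
    print(f"|alpha| = {a}: Helstrom {helstrom_error(a):.3e}, homodyne {homodyne_error(a):.3e}")
```

The gap between the two curves is exactly what coding over repeated probings tries to close.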

14.
For high-dimensional data such as images, learning an encoder that can output a compact yet informative representation is a key task in its own right, in addition to facilitating subsequent processing of the data. We present a model that produces discrete infomax codes (DIMCO): we train a probabilistic encoder that yields k-way d-dimensional codes associated with input data. Our model maximizes the mutual information between codes and ground-truth class labels, with a regularization that encourages entries of a codeword to be statistically independent. In this context, we show that the infomax principle also justifies existing loss functions, such as cross-entropy, as special cases. Our analysis further shows that using shorter codes reduces overfitting in the context of few-shot classification, and our experiments demonstrate this implicit task-level regularization effect of DIMCO. Furthermore, we show that the codes learned by DIMCO are efficient in terms of both memory and retrieval time compared to prior methods.
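A minimal sketch of the quantity DIMCO maximizes: the empirical mutual information between discrete codes and class labels, estimated from a joint histogram. The data below is synthetic and purely illustrative.

```python
import numpy as np

def mutual_information(codes, labels):
    """I(C; Y) in bits, from the empirical joint distribution of two discrete variables."""
    joint = np.zeros((codes.max() + 1, labels.max() + 1))
    for c, y in zip(codes, labels):
        joint[c, y] += 1
    joint /= joint.sum()
    pc = joint.sum(axis=1, keepdims=True)   # marginal of codes
    py = joint.sum(axis=0, keepdims=True)   # marginal of labels
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pc @ py)[mask])))

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=1000)
codes = (labels + rng.integers(0, 2, size=1000)) % 4   # codes correlated with labels
print(mutual_information(codes, labels))               # > 0; equals H(Y) iff codes determine labels
```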

15.
Multi-focus image fusion is an important branch of image processing, and many methods have been developed from different perspectives to solve it. Among them, sparse representation (SR)-based and convolutional neural network (CNN)-based fusion methods have been widely used. Fusing source image patches, the SR-based model is essentially a local method with a nonlinear fusion rule. The CNN-based model, by contrast, maps directly between the source images following a learned decision map; this fusion is global, with a linear fusion rule. Combining the advantages of the two, a novel fusion method that applies a CNN to assist SR is proposed, with the aim of obtaining a fused image with more precise and abundant information. In the proposed method, source image patches are fused based on SR and new weights obtained by the CNN. Experimental results demonstrate that the proposed method not only clearly outperforms the SR and CNN methods in terms of visual perception and objective evaluation metrics, but is also significantly better than other state-of-the-art methods, while its computational complexity is greatly reduced.

16.
Polar codes are a relatively new family of linear block codes which have garnered a lot of attention from the scientific community, owing to their low-complexity implementation and provably capacity-achieving capability. They have been proposed for encoding information on the control channels in 5G wireless networks due to their robustness at short codeword lengths. The basic approach introduced by Arıkan can only generate polar codes of length N = 2^n, n ∈ ℕ. To overcome this limitation, polarization kernels of size larger than 2×2 (such as 3×3, 4×4, and so on) have been proposed in the literature. Additionally, kernels of different sizes can be combined to generate multi-kernel polar codes, further improving the flexibility of codeword lengths. These techniques undoubtedly improve the usability of polar codes in practical implementations. However, with the availability of so many design options and parameters, designing polar codes that are optimally tuned to specific underlying system requirements becomes extremely challenging, since a variation in system parameters can result in a different choice of polarization kernel. This necessitates a structured design technique for optimal polarization circuits. We previously developed the DTS-parameter to quantify the best rate-matched polar codes, and then developed and formalized a recursive technique for designing higher-order polarization kernels from smaller-order components. A scaled version of the DTS-parameter, the SDTS-parameter (denoted by the symbol ζ in this article), was used for the analytical assessment of this construction technique and validated for single-kernel polar codes. In this paper, we extend the analysis of the SDTS-parameter to multi-kernel polar codes and validate its applicability in this domain as well.
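A minimal sketch of the multi-kernel idea: the overall transform is the Kronecker product of the component kernels, so one 2×2 and one 3×3 kernel yield a length-6 code. The 3×3 kernel below is an assumed illustrative choice (several polarizing 3×3 kernels appear in the literature), not necessarily the one analyzed in the paper.

```python
import numpy as np

F2 = np.array([[1, 0],
               [1, 1]], dtype=np.uint8)       # Arikan's 2x2 kernel
T3 = np.array([[1, 0, 0],
               [1, 1, 0],
               [1, 0, 1]], dtype=np.uint8)    # assumed 3x3 kernel; variants exist

def multi_kernel_transform(kernels):
    """G = K_1 kron K_2 kron ... over GF(2); the codeword is x = u @ G mod 2."""
    G = np.array([[1]], dtype=np.uint8)
    for K in kernels:
        G = np.kron(G, K) % 2
    return G

G6 = multi_kernel_transform([F2, T3])          # transform of a length-6 multi-kernel code
u = np.array([0, 1, 0, 1, 1, 0], dtype=np.uint8)
x = (u @ G6) % 2
```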

17.
Although long polar codes with successive-cancellation decoding can asymptotically achieve channel capacity, the performance of short-blocklength polar codes is far from optimal. Recently, Arıkan proposed employing a convolutional pre-transformation before the polarization network, called polarization-adjusted convolutional (PAC) codes. In this paper, we focus on improving the performance of short PAC codes concatenated with a cyclic redundancy check (CRC) outer code, CRC-PAC codes, since error-detection capability is essential in practical applications such as the polar coding scheme for the control channel. We propose an enhanced adaptive belief propagation (ABP) decoding algorithm for PAC codes with the assistance of CRC bits. We also derive joint parity-check matrices of CRC-PAC codes suitable for iterative BP decoding. The proposed CRC-aided ABP (CA-ABP) decoding can effectively improve error performance when partial CRC bits are used in the decoding, while the error-detection ability is still guaranteed by the remaining CRC bits and the adaptive decoding parameters. Moreover, compared with conventional CRC-aided list (CA-List) decoding, our proposed scheme can significantly reduce computational complexity, achieving a better trade-off between performance and complexity for short PAC codes.
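As a reminder of the CRC half of a CRC-PAC code, here is a minimal sketch of appending CRC bits by polynomial division over GF(2); the 8-bit polynomial is illustrative, not the paper's choice.

```python
def crc_remainder(bits, poly):
    """Remainder of bits * x^(len(poly)-1) divided by poly, over GF(2)."""
    bits = list(bits) + [0] * (len(poly) - 1)     # shift the message up by deg(poly)
    for i in range(len(bits) - len(poly) + 1):    # bitwise long division
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
poly = [1, 0, 0, 0, 0, 0, 1, 1, 1]     # x^8 + x^2 + x + 1 (an illustrative CRC-8)
codeword = msg + crc_remainder(msg, poly)
# The receiver recomputes the remainder over the whole codeword: all zeros
# means the CRC check passes (usable for detection and to assist ABP decoding).
assert crc_remainder(codeword, poly) == [0] * 8
```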

18.
Information reconciliation is a key step in quantum key distribution (QKD), affecting both the key rate and the secure distance of QKD. As a reconciliation scheme that is particularly efficient at low signal-to-noise ratios, the multidimensional reconciliation algorithm has been successfully applied to Gaussian-modulated continuous-variable QKD, extending the communication distance. This paper studies the application of binary LDPC codes in the multidimensional reconciliation algorithm and then extends the scheme to non-binary LDPC codes. Simulations show that, compared with binary LDPC codes, non-binary LDPC codes yield a clear performance gain in multidimensional reconciliation.

19.
One of the most effective image processing techniques is the use of convolutional neural networks built from convolutional layers. In each such layer, the value of the layer's output signal at each point is a combination of the layer's input signals corresponding to several neighboring points. To improve the accuracy, researchers have developed a version of this technique in which only data from some of the neighboring points is processed. It turns out that the most efficient case, called dilated convolution, is when we select the neighboring points whose differences in both coordinates are divisible by some fixed constant. In this paper, we explain this empirical efficiency by proving that, for all reasonable optimality criteria, dilated convolution is indeed better than the possible alternatives.
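A minimal sketch of the dilated convolution described above: the kernel taps land only on neighbors whose coordinate offsets are multiples of the dilation rate d.

```python
import numpy as np

def dilated_conv2d(image, kernel, d):
    """'Valid' 2-D cross-correlation with dilation rate d."""
    kh, kw = kernel.shape
    H = image.shape[0] - d * (kh - 1)
    W = image.shape[1] - d * (kw - 1)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # taps at offsets (a*d, b*d): only every d-th neighbor is used
            patch = image[i:i + d * kh:d, j:j + d * kw:d]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0
print(dilated_conv2d(img, k, d=2).shape)   # (2, 2): a 3x3 kernel spans 5x5 when d = 2
```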

20.
Speaker recognition is an important classification task which can be solved using several approaches. Although building a speaker recognition model on a closed set of speakers under neutral speaking conditions is a well-researched task with solutions that provide excellent performance, the classification accuracy of such models decreases significantly when they are applied to emotional speech or in the presence of interference. Furthermore, deep models may require a large number of parameters, so constrained solutions are desirable in order to implement them on edge devices in Internet of Things systems for real-time detection. The aim of this paper is to propose a simple and constrained convolutional neural network for speaker recognition tasks and to examine its robustness for recognition under emotional speech conditions. We examine three quantization methods for developing a constrained network: an eight-bit floating-point format, ternary scalar quantization, and binary scalar quantization. The results are demonstrated on the recently recorded SEAC dataset.
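A minimal sketch of one of the three quantizers examined, ternary scalar quantization, which maps weights to {−s, 0, +s}; the 0.7·mean|w| threshold is the heuristic from Ternary Weight Networks and is an assumption here, as the paper's exact rule may differ.

```python
import numpy as np

def ternarize(w):
    """Map a weight tensor to {-s, 0, +s} with a per-tensor scale s."""
    t = 0.7 * np.mean(np.abs(w))                            # threshold heuristic (TWN)
    mask = np.abs(w) > t                                    # weights that survive
    s = np.mean(np.abs(w[mask])) if mask.any() else 0.0     # scale from surviving weights
    return s * np.sign(w) * mask, s

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4)).astype(np.float32)
wq, scale = ternarize(w)
# Each weight now needs about 1.6 bits (three levels) plus one shared float scale.
```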
