Similar Literature
20 similar documents found (search time: 15 ms)
1.
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive-cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance compared to CRC-aided list decoding of conventional polar codes. PAC codes rest on two main ideas: replacing CRC precoding with convolutional precoding (under appropriate rate profiling) and replacing list decoding with sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ≥ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insight into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes.
We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν = 2.
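The union-bound approximation mentioned above can be sketched numerically: given an estimated low-weight spectrum {A_w}, the block error rate over BPSK/AWGN is upper-bounded by the sum of A_w · Q(sqrt(2wR·Eb/N0)) over the low weights w. A minimal sketch, where the spectrum values are illustrative placeholders rather than an actual PAC weight distribution:

```python
import math

def union_bound(weights, rate, ebn0_db):
    """Truncated union bound on block error rate over BPSK/AWGN.

    weights: dict mapping Hamming weight w -> codeword count A_w
             (illustrative values, not a real PAC weight spectrum).
    """
    ebn0 = 10 ** (ebn0_db / 10)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))  # Gaussian Q-function
    return sum(a_w * q(math.sqrt(2 * w * rate * ebn0))
               for w, a_w in weights.items())

# Hypothetical low-weight spectrum for a rate-1/2 code.
spectrum = {8: 100, 12: 1000}
p_e = union_bound(spectrum, rate=0.5, ebn0_db=3.0)
```

At moderate SNR the bound is dominated by the minimum-weight term, which is why estimating the number of low-weight codewords suffices to approximate performance.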

2.
An end-to-end joint source–channel (JSC) encoding matrix and a JSC decoding scheme using the proposed bit-flipping check (BFC) algorithm and the controversial-variable-node-selection-based adaptive belief propagation (CVNS-ABP) decoding algorithm are presented to improve the efficiency and reliability of the joint source–channel coding (JSCC) scheme based on double Reed–Solomon (RS) codes. The constructed coding matrix can realize source compression and channel coding of multiple sets of information data simultaneously, which significantly improves the coding efficiency. The proposed BFC algorithm uses channel soft information to select and flip the unreliable bits and then uses the redundancy of the source block to perform error verification and error correction. The proposed CVNS-ABP algorithm reduces the influence of erroneous bits on decoding by selecting erroneous variable nodes (VNs) from among the controversial VNs and adding them to the sparse parity-check matrix. In addition, the proposed JSC decoding scheme based on the BFC and CVNS-ABP algorithms can connect source and channel decoding to improve JSC decoding performance. Simulation results show that the proposed BFC-based hard-decision decoding (BFC-HDD) algorithm (ζ = 1) and BFC-based low-complexity chase (BFC-LCC) algorithm (ζ = 1, η = 3) achieve about 0.23 dB and 0.46 dB of signal-to-noise ratio (SNR) gain over the prior-art decoding algorithm at a frame error rate (FER) of 10⁻¹. Compared with the ABP algorithm, the proposed CVNS-ABP and BFC-CVNS-ABP algorithms achieve performance gains of 0.18 dB and 0.23 dB, respectively, at FER = 10⁻³.
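The bit-flipping idea described above — select the least reliable bits from channel soft information, then validate flip candidates against redundancy — can be sketched generically. This is not the paper's BFC algorithm: the even-parity checker, the flip budget, and the candidate window are all illustrative assumptions.

```python
from itertools import combinations

def flip_least_reliable(llrs, hard_bits, checker, max_flips=2):
    """Generic bit-flipping sketch in the spirit of a BFC-style decoder.

    Sort positions by reliability |LLR|, then try flipping the smallest
    patterns among the least reliable positions until the redundancy
    checker accepts a candidate word.
    """
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))
    if checker(hard_bits):
        return list(hard_bits)
    for k in range(1, max_flips + 1):
        for idxs in combinations(order[:8], k):  # candidate window: 8 weakest bits
            cand = list(hard_bits)
            for i in idxs:
                cand[i] ^= 1
            if checker(cand):
                return cand
    return list(hard_bits)  # no valid pattern found; keep hard decisions

llrs = [0.1, 3.0, 4.0, 5.0]               # bit 0 is least reliable
bits = [1, 0, 0, 0]                       # hard decisions fail the check
even_parity = lambda b: sum(b) % 2 == 0   # stand-in for source-redundancy check
decoded = flip_least_reliable(llrs, bits, even_parity)
```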

3.
Since the grand partition function Z_q for the so-called q-particles (i.e., quons), q ∈ (−1, 1), cannot be computed by using the standard 2nd quantisation technique involving the full Fock space construction for q = 0, and its q-deformations for the remaining cases, we determine such grand partition functions in order to obtain the natural generalisation of the Planck distribution to q ∈ [−1, 1]. We also note the (non) surprising fact that the right grand partition function for the Boltzmann case (i.e., q = 0) can easily be obtained by using the full Fock space 2nd quantisation, by considering the appropriate correction by the Gibbs factor 1/n! in the n-th term of the power series expansion with respect to the fugacity z. As an application, we briefly discuss the equation of state for a gas of free quons and the condensation phenomenon into the ground state, which also occurs for the Bose-like quons q ∈ (0, 1).
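The Gibbs-factor correction mentioned above can be checked numerically: with the 1/n! factor in the n-th term of the fugacity expansion, the Boltzmann grand partition function collapses to exp(z·Z₁). A small sketch, where Z₁ is a hypothetical one-particle canonical partition function:

```python
import math

def boltzmann_grand_partition(z, z1, nmax=50):
    """Boltzmann (q = 0) grand partition function via the fugacity
    expansion with the Gibbs factor 1/n!:

        Xi = sum_n (z * Z1)^n / n!

    which converges to exp(z * Z1).  z1 is a hypothetical
    one-particle canonical partition function.
    """
    return sum((z * z1) ** n / math.factorial(n) for n in range(nmax))

xi = boltzmann_grand_partition(0.3, 2.0)
```

Truncating at nmax = 50 leaves an error far below machine precision for these parameter values, so the series agrees with exp(0.6) essentially exactly.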

4.
5.
6.
The evolution of the scoring performance of Rugby Union players is investigated over the seven Rugby World Cups (RWC) that took place from 1987 to 2011, with specific attention to how it may have been impacted by the switch from amateurism to professionalism in 1995. The distributions of the points Ps scored by individual players, ranked in order of performance, were well described by the simplified canonical law Ps ∝ (r + β)⁻α, where r is the rank and β and α are the parameters of the distribution. The parameter α did not significantly change from 1987 to 2007 (α = 0.92 ± 0.03), indicating a negligible effect of professionalism on players' scoring performance. In contrast, the parameter β significantly increased, from β = 1.32 for the 1987 RWC, to β = 2.30 for the 1999 to 2003 RWCs, to β = 5.60 for the 2007 RWC, suggesting a progressive decrease in the relative performance of the best players. Finally, the sharp decreases observed in both α (α = 0.38) and β (β = 0.70) in the 2011 RWC indicate a more even distribution of performance among scorers, compared to the more heterogeneous distributions observed from 1987 to 2007, and suggest a sharp increase in the level of competition leading to an increase in the average quality of players and a decrease in the relative skills of the top players. Note that neither α nor β significantly correlates with traditional performance indicators such as the number of points scored by the best players, the number of games played by the best players, the number of points scored by the teams of the best players, or the total number of points scored over each RWC. This indicates that the dynamics of the scoring performance of Rugby Union players is influenced by hidden processes hitherto inaccessible through standard performance metrics, and suggests that players' scoring performance is connected to ubiquitous phenomena such as anomalous diffusion.
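The rank law above is easy to evaluate directly. Writing the offset parameter as beta, a larger beta flattens the head of the distribution, shrinking the top-ranked scorer's advantage over lower ranks, which is the effect the abstract reports between 1987 and 2007. A sketch using the abstract's fitted values:

```python
def score_law(r, alpha, beta, c=1.0):
    """Simplified canonical (Zipf-Mandelbrot-type) law: Ps = c * (r + beta)^(-alpha)."""
    return c * (r + beta) ** (-alpha)

def top_to_tenth_ratio(alpha, beta):
    """How many times more the rank-1 player scores than the rank-10 player."""
    return score_law(1, alpha, beta) / score_law(10, alpha, beta)

# Same alpha (0.92), but beta grows from 1.32 (1987 RWC) to 5.60 (2007 RWC):
r_1987 = top_to_tenth_ratio(0.92, 1.32)
r_2007 = top_to_tenth_ratio(0.92, 5.60)
```

With alpha held fixed, increasing beta alone reduces the relative dominance of the best players, consistent with the interpretation in the abstract.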

7.
8.
9.
10.
A unipolar electrohydrodynamic (UP-EHD) pump flow is studied with known electric potential at the emitter and zero electric potential at the collector. The model is designed for electric potential, charge density, and electric field. The dimensionless parameters, namely the electrical source number (Es), the electrical Reynolds number (ReE), and the electrical slip number (Esl), are considered over wide ranges of variation to analyze the UP-EHD pump flow. To interpret the pump flow of the UP-EHD model, a hybrid metaheuristic solver is designed, consisting of the recently developed sine–cosine algorithm (SCA) and sequential quadratic programming (SQP) under the influence of an artificial neural network. The method is abbreviated as ANN-SCA-SQP. The superiority of the technique is shown by comparing the solution with reference solutions. For a large data set, the technique is executed for one hundred independent experiments. The performance is evaluated through performance operators and convergence plots.
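The sine–cosine algorithm component of the hybrid solver can be sketched in a few lines. The SQP refinement and neural-network surrogate used in the paper are not reproduced here, and the sphere test function is an illustrative stand-in for the UP-EHD objective:

```python
import math, random

def sca_minimize(f, dim, bounds, pop=20, iters=200, a=2.0, seed=0):
    """Minimal sine-cosine algorithm (SCA) sketch.

    Each agent moves toward (or around) the best-so-far solution with a
    step modulated by sin/cos of a random phase; the amplitude r1 shrinks
    linearly, shifting from exploration to exploitation.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * a / iters  # decays from a to 0
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                wave = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
                x[j] = min(hi, max(lo, x[j] + r1 * wave * abs(r3 * best[j] - x[j])))
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda v: sum(c * c for c in v)  # toy objective, minimum at the origin
sol = sca_minimize(sphere, dim=3, bounds=(-5, 5))
```

In the paper's hybrid scheme, a local gradient-based method (SQP) would then polish the SCA result; here the sketch stops at the global-search stage.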

11.
We study the viable Starobinsky f(R) dark energy model in spatially non-flat FLRW backgrounds, where f(R) = R − λR_ch[1 − (1 + R²/R_ch²)⁻¹], with R_ch and λ representing the characteristic curvature scale and the model parameter, respectively. We modify the CAMB and CosmoMC packages with the recent observational data to constrain Starobinsky f(R) gravity and the curvature density parameter Ω_K. In particular, we find the model and density parameters to be λ⁻¹ < 0.283 at 68% C.L. and Ω_K = 0.00099 (+0.0044/−0.0042) at 95% C.L., respectively. The best-fit χ² result shows that χ²_f(R) ≲ χ²_ΛCDM, indicating that the viable f(R) gravity model is consistent with ΛCDM when Ω_K is taken as a free parameter. We also evaluate the values of AIC, BIC and DIC for the best-fit results of the f(R) and ΛCDM models in the non-flat universe.
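The AIC and BIC model-comparison criteria mentioned at the end are computed directly from a best-fit χ² and the parameter count; the χ² values and parameter numbers below are hypothetical placeholders, not the paper's fits:

```python
import math

def aic(chi2, k):
    """Akaike information criterion from a best-fit chi-squared; k = free parameters."""
    return chi2 + 2 * k

def bic(chi2, k, n):
    """Bayesian information criterion; n = number of data points."""
    return chi2 + k * math.log(n)

# Hypothetical comparison: an f(R) fit with one extra parameter vs. LambdaCDM.
d_aic = aic(3800.0, 7) - aic(3800.5, 6)
```

A positive difference means the extra parameter is not paying for itself: a nearly identical χ² with one more free parameter is penalized, which is how "consistent with ΛCDM" conclusions are typically quantified.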

12.
Recently, Savaré and Toscani proved that the Rényi entropy power of general probability densities solving the p-nonlinear heat equation in R^n is a concave function of time under certain conditions on the three parameters n, p, μ, which extends Costa's concavity inequality for Shannon's entropy power to the Rényi entropy power. In this paper, we give a condition Φ(n, p, μ) on n, p, μ under which the concavity of the Rényi entropy power is valid. The condition Φ(n, p, μ) contains Savaré and Toscani's condition as a special case, and many more cases besides. Precisely, the points (n, p, μ) satisfying Savaré and Toscani's condition form a two-dimensional subset of R³, while the points satisfying the condition Φ(n, p, μ) form a three-dimensional subset of R³. Furthermore, Φ(n, p, μ) gives the necessary and sufficient condition in a certain sense. Finally, the conditions are obtained with a systematic approach.

13.
We use an m-vicinity method to examine Ising models on hypercube lattices of high dimensions d ≥ 3. This method is applicable for both short-range and long-range interactions. We introduce a small parameter, which determines whether the method can be used when calculating the free energy. When we account for interaction with the nearest neighbors only, the value of this parameter depends on the dimension of the lattice d. We obtain an expression for the critical temperature in terms of the interaction constants that is in good agreement with the results of computer simulations. For d = 5, 6, 7, our theoretical estimates match the numerical results both qualitatively and quantitatively. For d = 3, 4, our method is sufficiently accurate for the calculation of the critical temperatures; however, it predicts a finite jump of the heat capacity at the critical point. In the case of the three-dimensional lattice (d = 3), this contradicts the commonly accepted ideas about the type of the singularity at the critical point. For the four-dimensional lattice (d = 4), the character of the singularity is under current discussion. For the dimensions d = 1, 2 the m-vicinity method is not applicable.

14.
Detrended Fluctuation Analysis (DFA) has become a standard method to quantify the correlations and scaling properties of real-world complex time series. For a given scale of observation s, DFA provides the function F(s), which quantifies the fluctuations of the time series around the local trend, which is subtracted (detrended). If the time series exhibits scaling properties, then F(s) ∝ s^α asymptotically, and the scaling exponent α is typically estimated as the slope of a linear fit in the log F(s) vs. log s plot. In this way, α measures the strength of the correlations and characterizes the underlying dynamical system. However, in many cases, and especially in physiological time series, the scaling behavior is different at short and long scales, resulting in log F(s) vs. log s plots with two different slopes: α₁ at short scales and α₂ at large scales of observation. These two exponents are usually associated with the existence of different mechanisms that work at distinct time scales acting on the underlying dynamical system. Here, however, and since the power-law behavior of F(s) is asymptotic, we question the use of α₁ to characterize the correlations at short scales. To this end, we first show that, even for artificial time series with perfect scaling, i.e., with a single exponent α valid for all scales, DFA provides an α₁ value that systematically overestimates the true exponent α. Second, when artificial time series with two different scaling exponents at short and large scales are considered, the α₁ value provided by DFA not only can severely underestimate or overestimate the true short-scale exponent, but also depends on the value of the large-scale exponent.
This behavior should preclude the use of α₁ to describe the scaling properties at short scales: if DFA is applied to two time series with the same scaling behavior at short scales but very different scaling properties at large scales, very different values of α₁ will be obtained, even though the short-scale properties are identical. These artifacts may lead to wrong interpretations when analyzing real-world time series: on the one hand, for a time series with truly perfect scaling, the spurious value of α₁ could wrongly suggest that some specific mechanism acts only at short time scales in the dynamical system. On the other hand, for a time series with genuinely different scaling at short and large scales, the incorrect α₁ value would not properly characterize the short-scale behavior of the dynamical system.
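A minimal DFA sketch (not the authors' code) illustrates the procedure described above: integrate the mean-subtracted series, detrend non-overlapping windows of size s with a polynomial fit, compute the root-mean-square fluctuation F(s), and read α off the slope of log F(s) versus log s. White noise is used as a test input because its true exponent is α = 0.5 at all scales.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Plain first-order DFA: returns F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))        # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        sq = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)  # local trend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 14)         # white noise: true alpha = 0.5
scales = [16, 32, 64, 128, 256]
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Fitting a single slope over one scale range, as here, is exactly the step the abstract warns about when the range includes short scales: the asymptotic law F(s) ∝ s^α need not hold there.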

15.
In this work, we first derive novel parameterized identities for the left and right parts of the (p,q)-analogue of the Hermite–Hadamard inequality. Second, using these new parameterized identities, we obtain new parameterized (p,q)-trapezoid and (p,q)-midpoint type integral inequalities via η-quasiconvex functions. By varying the parameter μ ∈ [0, 1], some new special cases of the main results are obtained and some known results are recovered as well. Finally, an application to special means is given. From an application perspective, the results on η-quasiconvex functions illustrate the applicability and strength of the approach in comparison with the existing literature.

16.
In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information-theoretic approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information compression, which differs from observations on End-to-End (E2E) learning. Additionally, CL can inherit information about targets, and gradually specialise extracted features layer-by-layer. We evaluate this effect by proposing an information transition ratio, I(T;Y)/I(X;T), and show that it can serve as a useful heuristic in setting the depth of a neural network that achieves satisfactory classification accuracy.
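The information transition ratio proposed above can be estimated with plug-in mutual information on discrete variables. The toy "layer" below, where T deterministically coarse-grains X and Y coincides with T, is an illustrative assumption, not the paper's network:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in mutual information (in bits) from a list of joint samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# Toy layer: T halves the alphabet of X; the "target" Y equals T here,
# so all information reaching T is about Y and the ratio is 1.
X = [0, 1, 2, 3] * 50
T = [x // 2 for x in X]
Y = T[:]
ratio = mutual_information(list(zip(T, Y))) / mutual_information(list(zip(X, T)))
```

For a real network, T would be a quantized hidden representation, and the ratio rising toward 1 across layers would indicate features specialising toward the target, as described in the abstract.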

17.
18.
This study deals with drift parameter estimation problems for the sub-fractional Vasicek process given by dx_t = θ(μ − x_t)dt + dS_t^H, with θ > 0 and μ ∈ R unknown and t ≥ 0; here, S^H denotes a sub-fractional Brownian motion (sfBm). We introduce new estimators θ̂ for θ and μ̂ for μ based on discrete-time observations, using techniques from Nourdin–Peccati analysis. For the proposed estimators θ̂ and μ̂, strong consistency and asymptotic normality are established by employing the properties of S^H. Moreover, we provide numerical simulations of sfBm and the related Vasicek-type process for different values of the Hurst index H.
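The mean-reverting drift in the equation above can be illustrated with a simple Euler scheme. For illustration, a standard Brownian motion (the H = 1/2 case) stands in for the sub-fractional driver S^H, since simulating sfBm itself requires sampling from its covariance structure:

```python
import math, random

def simulate_vasicek(theta, mu, x0, T=50.0, n=5000, seed=0):
    """Euler scheme for dx_t = theta*(mu - x_t) dt + dB_t.

    B is a standard Brownian motion standing in for S^H (H = 1/2);
    Gaussian increments have standard deviation sqrt(dt).
    """
    rng = random.Random(seed)
    dt = T / n
    x = x0
    path = [x0]
    for _ in range(n):
        x += theta * (mu - x) * dt + rng.gauss(0.0, math.sqrt(dt))
        path.append(x)
    return path

path = simulate_vasicek(theta=2.0, mu=1.0, x0=0.0)
# after the transient, the process fluctuates around mu
m = sum(path[len(path) // 2:]) / (len(path) - len(path) // 2)
```

Drift estimators like θ̂ and μ̂ are built from exactly such discrete observations x_{t_i}; the long-run sample mean hovering near μ is the feature the μ-estimator exploits.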

19.
Private Information Retrieval (PIR) protocols, which allow the client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, advertisement, etc. Multi-server PIR protocols, where the database is replicated among the non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. in CCC '12 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for k ≥ 3 servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret sharing schemes with respect to a certain type of "modified universal" relation. In a useful particular instantiation of the paradigm, they used a share conversion from (2,3)-CNF sharing over Z_m to 3-additive sharing over Z_{p^β} for primes p_1, p_2, p where p_1 ≠ p_2 and m = p_1·p_2, and the relation is the modified universal relation C_{S_m}. They reduced the question of the existence of the share conversion for a triple (p_1, p_2, p) to the (in)solvability of a certain linear system over Z_p, and provided an efficient (in m and log p) construction of such a sharing scheme. Unfortunately, the size of the system is Θ(m²), which makes a direct solution infeasible for big m in practice. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for the case of odd p_1, p_2 when p = p_1, obtaining in this way infinitely many parameters for which the conversion exists, but for infinitely many others the question remained open. In this work, using some algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even m in the case p = 2 (we computed β in this case) and the absence of the conversion for even m in the case p > 2.
This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with k ≥ 3 servers, using the relation C_{S_m} where m has more than two prime divisors. Another suggestion of ours concerning 3-server PIR is that it is possible to achieve a shorter server response using the relation C_{S_m'} for an extended set S_m' ⊇ S_m. By computer search within the BIKO framework, we found several such sets for small m which yield a share conversion from (2,3)-CNF sharing over Z_m to 3-additive secret sharing over Z_{p^{β'}}, where β' > 0 is several times smaller than β, which implies a several-times-shorter server response. We also suggest that such extended sets S_m' may result in better PIR due to the potential existence of matching vector families with higher Vapnik–Chervonenkis dimension.
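The 3-additive sharing that serves as the target of the share conversion above is simple to illustrate; this sketch shows the target scheme only, not the (2,3)-CNF sharing or the conversion itself, and the modulus m = 6 = 2·3 is chosen to mirror the two-prime shape m = p_1·p_2:

```python
import random

def share_additive(secret, m, parties=3, rng=random):
    """3-out-of-3 additive secret sharing over Z_m: the shares are
    uniformly random subject to summing to the secret mod m, so any
    two shares alone reveal nothing about the secret."""
    shares = [rng.randrange(m) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % m)
    return shares

def reconstruct(shares, m):
    return sum(shares) % m

m = 6          # = 2 * 3, a two-prime modulus as in the BIKO instantiation
shares = share_additive(4, m)
assert reconstruct(shares, m) == 4
```

The server-response length tracks the size of the target sharing's share domain, which is why converting to Z_{p^{β'}} with a smaller β' shortens the response.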

20.
The discrepancy between one-electron and two-electron densities for diverse N-electron atoms, comprising neutral systems (with nuclear charge Z = N) and singly charged ions (|N − Z| = 1), is quantified by means of mutual information, I, and the Quantum Similarity Index, QSI, in the conjugate position and momentum spaces. These differences can be interpreted as a measure of the electron correlation of the system. The analysis is carried out for systems with nuclear charge up to Z = 103 and singly charged ions (cations and anions) up to N = 54. The interelectronic correlation, for any given system, is quantified through the comparison of its two-variable electron pair density with the product of the respective one-particle densities. An in-depth study along the Periodic Table reveals the importance, far beyond the weight of the systems considered, of their shell structure.
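The Quantum Similarity Index used above is a normalized overlap of two densities, equal to 1 exactly when they are proportional. A sketch on a common 1-D grid with hypothetical exponential-type densities (the comparison of a pair density with a product of one-particle densities would use the same formula in more variables):

```python
import math

def qsi(rho1, rho2, dx):
    """Quantum Similarity Index on a common grid:

        QSI = <rho1, rho2> / (||rho1|| * ||rho2||)

    i.e. a normalized L2 overlap integral, discretized as Riemann sums.
    """
    dot = sum(a * b for a, b in zip(rho1, rho2)) * dx
    n1 = math.sqrt(sum(a * a for a in rho1) * dx)
    n2 = math.sqrt(sum(b * b for b in rho2) * dx)
    return dot / (n1 * n2)

# Two hypothetical normalized 1-D densities on [0, 10).
dx = 0.01
grid = [i * dx for i in range(1000)]
rho_a = [math.exp(-r) for r in grid]
rho_b = [2 * math.exp(-2 * r) for r in grid]
similarity = qsi(rho_a, rho_b, dx)
```

For these two exponentials the exact index is 2·sqrt(2)/3 ≈ 0.943; deviations from 1 play the role of the correlation measure discussed in the abstract.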


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号