Similar Literature
A total of 20 similar records were found.
1.
2.
A new method to obtain a series of reduced dynamics at various stages of coarse-graining is proposed. This ranges from the most coarse-grained description, which agrees with the deterministic time evolution equation for averages of the relevant variables, to the least coarse-grained one, which is the generalized Fokker-Planck equation for the probability distribution function of the relevant variables. The method is based on the extension of the Kawasaki-Gunton operator with the help of the principle of maximum entropy.

3.
We formulate an elementary statistical game which captures the essence of some fundamental quantum experiments such as photon polarization and spin measurement. We explore and compare the significance of the principle of maximum Shannon entropy and the principle of minimum Fisher information in solving such a game. The solution based on the principle of minimum Fisher information coincides with the solution based on an invariance principle, and provides an informational explanation of Malus' law for photon polarization. There is no solution based on the principle of maximum Shannon entropy. The result demonstrates the merits of Fisher information, and the demerits of Shannon entropy, in treating some fundamental quantum problems. It also provides a quantitative example in support of a general philosophy: Nature intends to hide Fisher information, while obeying some simple rules.
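As an illustrative aside (a numerical check under an assumed Bernoulli measurement model, not the paper's game formulation), the sketch below verifies the invariance property behind this result: if a single detection succeeds with probability cos²θ (Malus' law), the Fisher information about θ is the same for every θ.

```python
import numpy as np

# Numerical check under an assumed Bernoulli measurement model (not the paper's game):
# if a single detection succeeds with probability p(theta) = cos^2(theta) (Malus' law),
# the Fisher information I(theta) = p'(theta)^2 / (p (1 - p)) is constant in theta,
# which is the invariance property the minimum-Fisher-information argument exploits.
def fisher_information(theta, h=1e-6):
    p = lambda t: np.cos(t) ** 2
    dp = (p(theta + h) - p(theta - h)) / (2.0 * h)   # numerical derivative of p
    return dp ** 2 / (p(theta) * (1.0 - p(theta)))

for theta in [0.3, 0.7, 1.1]:
    print(theta, fisher_information(theta))          # ~4.0 for every theta
```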

4.
Anomaly Detection in Multichannel Spectral Images Based on Maximum-Entropy Fusion of Multiple Detectors
《光子学报》2007,36(7):1338-1344
An anomaly detection algorithm for multichannel spectral images based on maximum-entropy fusion of multiple detectors is proposed. Several different anomaly detectors are selected, and the output distribution of each is estimated with an adaptive-bandwidth non-parametric kernel density estimator, which preserves the "long-tail" character of multichannel spectral image data and avoids the model error introduced by prior model assumptions. The outputs of the original detectors are projected into a transformed space with standard normal marginal distributions, where a modeled maximum-entropy fusion rule achieves optimal probabilistic fusion of the multiple detectors at the decision level. Target detection in the multichannel spectral image is then completed in the original data space through a likelihood-function test. Experiments on airborne EPS-A multichannel spectral images demonstrate the effectiveness of the algorithm.
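A hypothetical illustration of one intermediate step described above, mapping a detector's output scores to a space with standard normal marginals via the probability integral transform; gamma-distributed scores and a plain Gaussian KDE stand in for real detector outputs and the paper's adaptive-bandwidth kernel estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Map synthetic detector scores to a space with standard normal marginals.
rng = np.random.default_rng(0)
scores = rng.gamma(shape=2.0, size=1000)         # heavy-tailed detector output (synthetic)
kde = gaussian_kde(scores)
grid = np.linspace(scores.min(), scores.max(), 512)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                                   # approximate CDF of the detector output
u = np.interp(scores, grid, cdf).clip(1e-6, 1 - 1e-6)
z = norm.ppf(u)                                  # scores in the standard-normal-marginal space
print(z.mean(), z.std())                         # roughly 0 and 1
```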

5.
The probability density function (pdf) valid for the Gaussian case is often used to describe the convolutional noise pdf in the blind adaptive deconvolution problem, although it is known to apply only at the later stages of the deconvolution process, where the convolutional noise pdf tends to be approximately Gaussian. Recently, the convolutional noise pdf was approximated with the Edgeworth Expansion and with the Maximum Entropy density function for the 16 Quadrature Amplitude Modulation (QAM) input. However, for the hard channel case, the equalization algorithm based on the Maximum Entropy density approximation of the convolutional noise pdf showed no performance improvement over the original Maximum Entropy algorithm, while the Edgeworth Expansion approximation technique required additional predefined parameters in the algorithm. In this paper, the Generalized Gaussian density (GGD) function and the Edgeworth Expansion are applied to approximate the convolutional noise pdf for the 16 QAM input case, with no need for additional predefined parameters in the resulting equalization method. Simulation results indicate that, for the hard channel case, improved equalization performance of approximately 15,000 symbols in convergence time is obtained with our newly proposed equalization method based on the new model for the convolutional noise pdf, compared to the original Maximum Entropy algorithm. By convergence time, we mean the number of symbols required to reach a residual inter-symbol interference (ISI) for which reliable decisions can be made on the equalized output sequence.
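A minimal sketch of the Generalized Gaussian density family mentioned above (not the paper's equalizer); alpha (scale), beta (shape), and the sample points are arbitrary demonstration values, with beta = 2 recovering the Gaussian case.

```python
import numpy as np
from scipy.special import gamma

# Generalized Gaussian density: p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)^beta)
def ggd_pdf(x, alpha=1.0, beta=2.0):
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * np.exp(-(np.abs(x) / alpha) ** beta)

x = np.linspace(-4.0, 4.0, 9)
print(ggd_pdf(x, alpha=1.0, beta=1.5))           # beta < 2 gives heavier tails than the Gaussian case
```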

6.
Entropy estimation faces numerous challenges when applied to various real-world problems. Our interest is in divergence and entropy estimation algorithms capable of rapid estimation for natural sequence data such as human and synthetic languages. This typically requires a large amount of data; however, we propose a new approach based on a new rank-based analytic Zipf–Mandelbrot–Li probabilistic model. Unlike previous approaches, which do not consider the nature of the probability distribution in relation to language, here we introduce a novel analytic Zipfian model which includes linguistic constraints. This provides more accurate distributions for natural sequences such as natural or synthetic emergent languages. Results are given which indicate the performance of the proposed ZML model. We derive an entropy estimation method which incorporates the linguistic-constraint-based Zipf–Mandelbrot–Li model into a new non-equiprobable coincidence counting algorithm, which is shown to be effective for tasks such as entropy rate estimation with limited data.
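A generic Zipf–Mandelbrot rank-frequency model (not the paper's linguistically constrained ZML variant), with made-up vocabulary size and parameters, just to show how such an analytic model yields a distribution whose entropy can be evaluated directly.

```python
import numpy as np

# Generic Zipf-Mandelbrot model: p(r) proportional to 1 / (r + q)^s for ranks r = 1..V.
def zipf_mandelbrot(V=10_000, s=1.1, q=2.7):
    ranks = np.arange(1, V + 1)
    w = 1.0 / (ranks + q) ** s
    return w / w.sum()

p = zipf_mandelbrot()
entropy_bits = -(p * np.log2(p)).sum()           # entropy of the analytic model itself
print(f"model entropy ≈ {entropy_bits:.2f} bits")
```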

7.
Estimation of the probability density function from its statistical power moments presents a challenging nonlinear numerical problem, posed by unbalanced nonlinearities, numerical instability, and a lack of convergence, especially for larger numbers of moments. Despite many numerical improvements over the past two decades, the classical moment problem of maximum entropy (MaxEnt) is still a very demanding numerical and statistical task. Among other results, it has been shown how Fup basis functions with compact support can significantly improve the convergence properties of this nonlinear algorithm, but there are still many obstacles to an efficient pdf solution in different applied examples. Therefore, besides the classical nonlinear Algorithm 1, in this paper we present a linear approximation of the MaxEnt moment problem as Algorithm 2, using exponential Fup basis functions. Algorithm 2 solves the linear problem, satisfying only the proposed moments, using an optimal exponential tension parameter that maximizes Shannon entropy. Algorithm 2 is very efficient for larger numbers of moments and especially for skewed pdfs. Since both algorithms have pros and cons, a hybrid strategy is proposed to combine their best approximation properties.
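A bare-bones sketch of the classical nonlinear MaxEnt moment problem referred to above as Algorithm 1, using ordinary power moments and a generic least-squares solver rather than Fup basis functions; the target moments are made up, and convergence is not guaranteed for harder (e.g., strongly skewed) moment sets.

```python
import numpy as np
from scipy.optimize import least_squares

# MaxEnt ansatz: p(x) = exp(-sum_k lambda_k x^k); choose the lambdas so the power
# moments of p match a prescribed (hypothetical) moment vector on [-1, 1].
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
target = np.array([1.0, 0.0, 0.2, 0.0, 0.08])    # hypothetical moments m_0..m_4

def moments(lmbda):
    p = np.exp(-np.polynomial.polynomial.polyval(x, lmbda))
    return np.array([(p * x**k).sum() * dx for k in range(len(target))])

sol = least_squares(lambda l: moments(l) - target, x0=np.zeros(len(target)))
print(sol.x)
print(moments(sol.x))                            # should be close to `target`
```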

8.
In this paper, we present a review of Shannon and differential entropy rate estimation techniques. Entropy rate, which measures the average information gain from a stochastic process, is a measure of the uncertainty and complexity of a stochastic process. We discuss the estimation of entropy rate from empirical data and review both parametric and non-parametric techniques. For parametric processes, we consider many different assumptions on the properties of the processes, in particular focusing on Markov and Gaussian assumptions. Non-parametric estimation relies on limit theorems which involve the entropy rate from observations; to discuss these, we introduce some theory and the practical implementations of estimators of this type.
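Under the Markov assumption mentioned in the review, the entropy rate has a closed form; a minimal sketch with an arbitrary two-state transition matrix follows.

```python
import numpy as np

# Entropy rate of a stationary first-order Markov chain with transition matrix P and
# stationary distribution pi: H = -sum_i pi_i sum_j P_ij log2 P_ij.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()                               # stationary distribution of the chain
logP = np.log2(P, where=P > 0, out=np.zeros_like(P))
H = -np.sum(pi[:, None] * P * logP)
print(f"entropy rate ≈ {H:.3f} bits per symbol")
```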

9.
The degradation and recovery processes are multi-scale phenomena in many physical, engineering, biological, and social systems, and determine the aging of the entire system. Therefore, understanding the interplay between the two processes at the component level is the key to evaluate the reliability of the system. Based on the principle of maximum entropy, an approach is proposed to model and infer the processes at the component level, and is applied to repairable and non-repairable systems. By incorporating the reliability block diagram, this approach allows for integrating the information of network connectivity and statistical moments to infer the hazard or recovery rates of the degradation or recovery processes. The overall approach is demonstrated with numerical examples.

10.
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback–Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, while when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer the least biased decisions based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. Incorporating prior beliefs into this framework allowed us to systematically explore the effects of prior beliefs on decision-making in the presence of market feedback, and importantly added a temporal interpretation to the framework. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation of the agent's past beliefs from its utility-maximisation behaviour, as well as analysis of the evolution of agent beliefs.
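An illustrative sketch of the kind of trade-off described above (toy payoffs and prior, not the paper's QRSE calibration of housing data): the least-biased decision tilts the prior toward higher payoffs, with an information-cost parameter controlling how far it departs from the prior.

```python
import numpy as np

# Boundedly rational choice: maximise expected payoff minus beta * KL(decision || prior),
# whose solution is the prior tilted exponentially by the payoffs (a quantal response).
def bounded_rational_choice(payoffs, prior, beta):
    w = np.asarray(prior) * np.exp(np.asarray(payoffs) / beta)
    return w / w.sum()

prior = np.array([0.5, 0.5])                     # prior beliefs over two actions
payoffs = np.array([1.0, 0.2])
for beta in [10.0, 1.0, 0.1]:                    # large beta = costly information -> stay near prior
    print(beta, bounded_rational_choice(payoffs, prior, beta))
```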

11.
In this study, causalities of COVID-19 across a group of seventy countries are analyzed with effective transfer entropy. To reveal the causalities, a weighted directed network is constructed, in which the weights of the links reveal the strength of the causality, obtained by calculating effective transfer entropies. Transfer entropy has some advantages over other causality evaluation methods: first, it can quantify the strength of the causality, and second, it can detect nonlinear causal relationships. After the construction of the causality network, it is analyzed with well-known network analysis methods such as eigenvector centrality, PageRank, and community detection. The eigenvector centrality and PageRank metrics reveal the importance and the centrality of each node country in the network. In community detection, node countries in the network are divided into groups such that countries within each group are much more densely connected to each other than to the rest of the network.
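A toy version of the network step described above; the country codes, effective transfer entropy values, and the use of networkx are illustrative assumptions, not the paper's data or implementation.

```python
import networkx as nx

# Once pairwise effective transfer entropies (ETE) have been estimated, they become
# weights of a directed graph, which is then ranked with PageRank and eigenvector centrality.
ete = {("US", "DE"): 0.08, ("DE", "FR"): 0.05, ("US", "FR"): 0.03,
       ("FR", "US"): 0.01, ("DE", "UK"): 0.02, ("UK", "US"): 0.04}
G = nx.DiGraph()
for (src, dst), w in ete.items():
    G.add_edge(src, dst, weight=w)               # edge direction = direction of causality
print(nx.pagerank(G, weight="weight"))
print(nx.eigenvector_centrality_numpy(G, weight="weight"))
```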

12.
An Image Segmentation Method Using the Maximum Entropy of the Two-Dimensional Attribute Histogram
The concept of the two-dimensional attribute histogram is proposed. It is a two-dimensional histogram constrained by prior knowledge, which can simplify some image processing methods and make them feasible. On this basis, an image segmentation method based on the two-dimensional attribute histogram is proposed. The steps of the method are to construct an attribute set for the image, determine the corresponding two-dimensional attribute histogram, and then determine the gray-level threshold using the maximum-entropy method applied to this histogram. To illustrate its performance, the method is applied to the segmentation of images of small undersea targets, and the maximum-entropy segmentation based on the one-dimensional attribute histogram is also applied for comparison. The results show that the proposed method is more robust to interference and gives better segmentation than the one-dimensional maximum-entropy method. The concept of the two-dimensional attribute histogram has both theoretical significance and application value, and the method is suitable for cases where some prior knowledge of the image is available.
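For reference, the one-dimensional maximum-entropy (Kapur-style) thresholding used above as the comparison baseline can be sketched as follows; the bimodal histogram is synthetic, and the 2D attribute-histogram extension is not reproduced here.

```python
import numpy as np

# Kapur-style maximum-entropy thresholding: choose the gray-level threshold t that
# maximises the sum of the entropies of the below- and above-threshold histogram segments.
def kapur_threshold(hist):
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        q0, q1 = p[:t] / w0, p[t:] / w1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 2000)]).clip(0, 255)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
print("threshold ≈", kapur_threshold(hist))      # falls between the two modes
```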

13.
Background: For the kinetic models used in contrast-based medical imaging, the assignment of the arterial input function (AIF) is essential for estimating the physiological parameters of the tissue by solving an optimization problem. Objective: In the current study, we estimate the AIF based on the modified maximum entropy method. The effectiveness of several numerical methods for determining the kinetic parameters and the AIF is evaluated in situations where sufficient information about the AIF is not available. The purpose of this study is to identify an appropriate method for estimating this function. Materials and Methods: The modified algorithm combines the maximum entropy approach with an optimization method, the teaching-learning method. Here, we applied this algorithm in a Bayesian framework to estimate the kinetic parameters while specifying the unique form of the AIF by the maximum entropy method. We assessed the proficiency of the proposed method for assigning the kinetic parameters in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), when the AIF is determined with some other parameter-estimation methods and with a standard fixed-AIF method. A previously analyzed dataset consisting of contrast agent concentrations in tissue and plasma was used. Results and Conclusions: We compared the accuracy of the parameters estimated by the MMEM with those of the empirical method, the maximum likelihood method, moment matching ("method of moments"), the least-squares method, the modified maximum likelihood approach, and our previous work. Since the current algorithm does not suffer from the starting-point problem in the parameter estimation phase, it could find the model nearest to the empirical model of the data; therefore, the results indicated the Weibull distribution as an appropriate and robust AIF and illustrated the power and effectiveness of the proposed method for estimating the kinetic parameters.

14.
饶光辉 《物理》1996,25(10):595-601
Direct determination of crystal structures from powder diffraction data is one of the hot topics in materials and crystallographic research. This article introduces the maximum-entropy method for powder diffraction structure analysis. The maximum-entropy method is based on the maximum-entropy principle of information theory and the maximum-likelihood principle. Owing to its unique advantages, it is one of the most promising methods for structure analysis from powder diffraction data.

15.
A New Optical Tomography Algorithm for Few Projections and Its Application
万雄  何兴道  高益庆 《光学学报》2003,23(12):433-1438
The reconstruction of plasma temperature fields from a small number of projections is studied. A free-arc plasma temperature field is reconstructed experimentally by combining an optical tomography reconstruction algorithm with the absolute spectral line intensity method from plasma spectroscopic diagnostics. Theoretically, a new optical tomography image reconstruction algorithm based on the maximum-entropy criterion and optimization principles is discussed in detail. Computer simulations examine the algorithm's ability to reconstruct asymmetric temperature field distributions, and the effects of projection noise, the number of projection directions, and the nature of the field distribution on reconstruction accuracy are analyzed in detail and compared with the algebraic iterative reconstruction (ART) algorithm. The results show that, using projection data from two orthogonal directions, the average error of the proposed algorithm in reconstructing a single-peak cosine simulated field is only 0.3%, versus 3.81% for ART; using projections at four uniformly spaced angles, its average error in reconstructing a three-peak random Gaussian simulated field is 1.77%, versus 2.02% for ART. In the experiment, the algorithm, combined with the absolute spectral line intensity method, is used to reconstruct the temperature distribution of a free-arc plasma.
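As context only, a minimal Kaczmarz-style ART iteration (the comparison baseline mentioned above, not the proposed maximum-entropy reconstruction), applied to a made-up 3x3 toy system.

```python
import numpy as np

# Kaczmarz-style ART: sweep over projection rows, correcting x toward each hyperplane.
# A is a made-up projection matrix, b the measured projections, x the reconstructed field.
def art(A, b, n_sweeps=50, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.2, 0.5, 0.3])
print(art(A, A @ x_true))                        # recovers x_true for this consistent toy system
```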

16.
Tea, one of the three most popular beverages in the world, not only refreshes the mind but also aids digestion and helps lower blood pressure. With increasingly strict requirements on tea quality, accurate discrimination of different tea varieties is needed to prevent mislabeled brands and substandard products being passed off as high-grade tea in the market. To achieve rapid and accurate tea identification, a tea variety discrimination system combining Fourier-transform near-infrared (FT-NIR) spectroscopy with a new fuzzy maximum-entropy clustering (FEC) algorithm was designed. Traditional fuzzy maximum-entropy clustering is prone to errors when clustering noisy data, i.e., FEC is sensitive to noise. To solve this problem, possibilistic C-means clustering (PCM) was introduced on top of the FEC algorithm, yielding a mixed fuzzy maximum-entropy clustering (MFEC) algorithm. MFEC obtains fuzzy membership values by iterative computation and can accurately cluster noisy FT-NIR spectral data of tea. First, FT-NIR spectra of three Anhui teas (Yuexi Cuilan, Lu'an Guapian, and Shiji Maofeng) were collected with an Antaris II FT-NIR spectrometer over the wavenumber range from 10 000 to 4 000 cm-1. Second, the collected spectra were preprocessed with multiplicative scatter correction (MSC); the preprocessed spectra were then reduced to 10 dimensions by principal component analysis (PCA), and linear discriminant analysis (LDA) was applied to the reduced data for feature extraction. Finally, the spectra of the three teas were clustered with both MFEC and the traditional FEC, and the two algorithms were compared in terms of clustering accuracy and convergence speed. The experimental results show that, for the same weighting exponent m, MFEC achieves higher clustering accuracy than FEC: at m = 2, the clustering accuracy of MFEC reached 100%, whereas that of traditional FEC under the same conditions was only 37.98%. MFEC converged within 10 iterations, while FEC required 100 iterations, so MFEC performs fuzzy clustering more efficiently and clearly outperforms FEC. Using FT-NIR spectroscopy, the tea variety discrimination system built from MFEC combined with PCA and LDA can quickly and accurately classify Yuexi Cuilan, Lu'an Guapian, and Shiji Maofeng teas, providing an innovative method and design approach for tea inspection, with theoretical value and good prospects for market application.
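A sketch of the dimensionality-reduction chain described above (PCA to 10 components followed by LDA), with random stand-in data in place of the MSC-corrected FT-NIR spectra; the MFEC clustering step itself is omitted, and the sample and wavenumber counts are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# PCA to 10 dimensions, then LDA feature extraction, as in the described pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 1557))                  # 96 spectra x 1557 wavenumber points (synthetic)
y = np.repeat([0, 1, 2], 32)                     # three tea classes
X10 = PCA(n_components=10).fit_transform(X)      # reduce spectra to 10 dimensions
features = LinearDiscriminantAnalysis(n_components=2).fit_transform(X10, y)
print(features.shape)                            # (96, 2) features passed on to clustering
```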

17.
18.
Shannon’s entropy is one of the building blocks of information theory and an essential aspect of Machine Learning (ML) methods (e.g., Random Forests). Yet, it is only finitely defined for distributions with fast decaying tails on a countable alphabet. The unboundedness of Shannon’s entropy over the general class of all distributions on an alphabet prevents its potential utility from being fully realized. To fill the void in the foundation of information theory, Zhang (2020) proposed generalized Shannon’s entropy, which is finitely defined everywhere. The plug-in estimator, adopted in almost all entropy-based ML method packages, is one of the most popular approaches to estimating Shannon’s entropy. The asymptotic distribution for Shannon’s entropy’s plug-in estimator was well studied in the existing literature. This paper studies the asymptotic properties for the plug-in estimator of generalized Shannon’s entropy on countable alphabets. The developed asymptotic properties require no assumptions on the original distribution. The proposed asymptotic properties allow for interval estimation and statistical tests with generalized Shannon’s entropy.
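The plug-in estimator discussed here is simple to state; a minimal sketch on synthetic categorical data (the alphabet and probabilities are made-up demonstration values).

```python
import numpy as np
from collections import Counter

# Plug-in estimator of Shannon's entropy: replace the unknown letter probabilities
# with empirical frequencies and evaluate H directly.
def plugin_entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()                # in nats

rng = np.random.default_rng(0)
sample = rng.choice(list("abcd"), size=500, p=[0.4, 0.3, 0.2, 0.1])
print(plugin_entropy(sample))                    # close to the true value of about 1.28 nats
```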

19.
In the cybersecurity field, the generation of random numbers is extremely important because they are employed in different applications such as the generation/derivation of cryptographic keys, nonces, and initialization vectors. The more unpredictable the random sequence, the higher its quality and the lower the probability that an adversary can recover the value of those random numbers. Cryptographically Secure Pseudo-Random Number Generators (CSPRNGs) are random number generators (RNGs) with specific properties, whose output sequence has such a degree of randomness that it cannot be distinguished from an ideal random sequence. In this work, we designed an all-digital RNG, which includes a Deterministic Random Bit Generator (DRBG) that meets the security requirements for cryptographic applications as a CSPRNG, plus an entropy source that showed high portability and a high level of entropy. The proposed design has been intensively tested against both the NIST and BSI suites to assess its entropy and randomness, and it is ready to be integrated into the European Processor Initiative (EPI) chip.
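Not the paper's hardware design: just a software CSPRNG draw (Python's OS-backed `secrets` module) with a crude monobit-style sanity check, only to illustrate the kind of output such a generator produces; real assessments use the full NIST/BSI suites.

```python
import secrets

# Draw 4096 random bytes from a cryptographically strong source and count the ones.
bits = "".join(f"{b:08b}" for b in secrets.token_bytes(4096))
ones = bits.count("1")
print(f"{ones} ones out of {len(bits)} bits -> bias {abs(ones / len(bits) - 0.5):.4f}")
```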

20.
The financial market is a complex system in which the assets influence each other, causing, among other factors, price interactions and co-movement of returns. Using the Maximum Entropy Principle approach, we analyze the interactions between a selected set of stock assets and equity indices under different high and low return-volatility episodes of the 2008 Subprime Crisis and the 2020 COVID-19 outbreak. We carry out an inference process to identify the interactions, in which we implement a pairwise Ising distribution model describing the first and second moments of the distribution of the discretized returns of each asset. Our results indicate that second-order interactions explain more than 80% of the entropy in the system during the Subprime Crisis and slightly more than 50% during the COVID-19 outbreak, independently of whether the period analyzed has high or low volatility. The evidence shows that during these periods, slight changes in the second-order interactions are enough to induce large changes in asset correlations, but the proportion of positive and negative interactions remains virtually unchanged. Although some interactions change signs, the proportion of these changes is the same from period to period, which keeps the system in a ferromagnetic state. These results are similar even when analyzing triadic structures in the signed network of couplings.
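A sketch of the statistics a pairwise (Ising-type) maximum-entropy model is fitted to, using synthetic returns in place of the stock/index data; the inference of the couplings themselves is omitted.

```python
import numpy as np

# Discretize each asset's return to a spin s_i = +/-1 and compute the first moments
# <s_i> and pairwise moments <s_i s_j> that constrain the pairwise Ising model.
rng = np.random.default_rng(0)
returns = rng.normal(size=(1000, 5))             # 1000 days x 5 assets (synthetic)
spins = np.where(returns > np.median(returns, axis=0), 1, -1)
first_moments = spins.mean(axis=0)               # <s_i>
pair_moments = spins.T @ spins / len(spins)      # <s_i s_j>
print(first_moments)
print(pair_moments[0, 1])
```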

