Similar Documents
13 similar documents found.
1.
In this work, we first consider the discrete version of the Fisher information measure and then propose the Jensen–Fisher information, developing some associated results. Next, we consider Fisher information and Bayes–Fisher information measures for the mixing parameter vector of a finite mixture probability mass function and establish some results. We provide connections between these measures and known informational measures such as the chi-square divergence, Shannon entropy, and the Kullback–Leibler, Jeffreys, and Jensen–Shannon divergences.
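
Since the abstract leans on the discrete Fisher information and its Jensen-type symmetrization, here is a minimal numeric sketch. It assumes one common discrete analogue of Fisher information for a pmf on {0, ..., n} and mirrors the Jensen–Shannon construction for the Jensen–Fisher divergence; the paper's precise definitions may differ, and the pmfs p and q below are invented for illustration.

    import numpy as np

    def discrete_fisher_information(p):
        # One common discrete analogue: I(p) = sum_x (p(x+1) - p(x))^2 / p(x).
        # Illustrative convention; the paper's definition may differ.
        p = np.asarray(p, dtype=float)
        return np.sum((p[1:] - p[:-1]) ** 2 / p[:-1])

    def jensen_fisher_divergence(p, q, w=0.5):
        # Jensen-type construction: convex combination of the informations
        # minus the information of the mixture (nonnegative, since I is convex).
        p, q = np.asarray(p, float), np.asarray(q, float)
        m = w * p + (1 - w) * q
        return (w * discrete_fisher_information(p)
                + (1 - w) * discrete_fisher_information(q)
                - discrete_fisher_information(m))

    p = np.array([0.05, 0.20, 0.30, 0.25, 0.15, 0.05])   # illustrative pmfs
    q = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])
    print(discrete_fisher_information(p), jensen_fisher_divergence(p, q))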

2.
The Belavkin–Staszewski relative entropy naturally characterizes the effects of possible noncommutativity of quantum states. In this paper, two new conditional entropy terms and four new mutual information terms are first defined by replacing the quantum relative entropy with the Belavkin–Staszewski relative entropy. Next, their basic properties are investigated, especially in classical-quantum settings. In particular, we show the weak concavity of the Belavkin–Staszewski conditional entropy and obtain the chain rule for the Belavkin–Staszewski mutual information. Finally, the subadditivity of the Belavkin–Staszewski relative entropy is established: up to certain multiplicative and additive factors, the Belavkin–Staszewski relative entropy of a joint system is bounded by the sum over its corresponding subsystems. We also provide a form of subadditivity for the geometric Rényi relative entropy.
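
As a concrete anchor, the sketch below evaluates the Belavkin–Staszewski relative entropy in its standard form D_BS(rho||sigma) = Tr[rho log(rho^{1/2} sigma^{-1} rho^{1/2})] next to the usual (Umegaki) quantum relative entropy; the two qubit states are invented for illustration, and D_BS >= D holds in general, with equality when the states commute.

    import numpy as np
    from scipy.linalg import logm, sqrtm, inv

    def bs_relative_entropy(rho, sigma):
        # Belavkin-Staszewski relative entropy (natural log),
        # assuming sigma is invertible: Tr[rho log(rho^{1/2} sigma^{-1} rho^{1/2})].
        r = sqrtm(rho)
        return np.real(np.trace(rho @ logm(r @ inv(sigma) @ r)))

    def umegaki_relative_entropy(rho, sigma):
        # Usual quantum relative entropy: Tr[rho (log rho - log sigma)].
        return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

    # Two non-commuting (made-up) qubit states; D_BS >= D in general.
    rho = np.array([[0.7, 0.2], [0.2, 0.3]])
    sigma = np.array([[0.5, -0.1], [-0.1, 0.5]])
    print(bs_relative_entropy(rho, sigma), umegaki_relative_entropy(rho, sigma))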

3.
A geometrical formulation of estimation theory for finite-dimensional C*-algebras is presented. This formulation allows one to deal with the classical and quantum cases in a single, unifying mathematical framework. The derivation of the Cramér–Rao and Helstrom bounds for parametric statistical models with discrete and finite outcome spaces is presented.
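
The Cramér–Rao bound mentioned here can be made concrete with a minimal classical example on a finite outcome space; the Bernoulli model, sample size, and seed below are arbitrary illustrative choices, and the Helstrom bound is its quantum analogue with the Fisher information replaced by the symmetric-logarithmic-derivative version.

    import numpy as np

    # Classical Cramer-Rao illustration for a Bernoulli(theta) model:
    # I(theta) = 1/(theta(1-theta)), so any unbiased estimator built from
    # n samples has variance >= theta(1-theta)/n.
    rng = np.random.default_rng(0)
    theta, n, trials = 0.3, 200, 5000
    mle = rng.binomial(n, theta, size=trials) / n   # sample mean = MLE, unbiased
    crb = theta * (1 - theta) / n
    print(f"empirical var: {mle.var():.6f}  Cramer-Rao bound: {crb:.6f}")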

4.
Sensor placement is an important factor that can significantly affect the localization performance of a sensor network. This paper investigates the sensor placement optimization problem in three-dimensional (3D) space for angle-of-arrival (AOA) target localization with Gaussian priors. We first show that, under the A-optimality criterion, the optimization problem can be reduced to diagonalizing the AOA-based Fisher information matrix (FIM). Secondly, we prove that the FIM is invariant under 3D rotations and that, with a Gaussian prior covariance matrix, it can be diagonalized via a 3D rotation. Based on this finding, we propose an optimal sensor placement method using 3D rotation for the case where prior information about the target location is available. Finally, several simulations demonstrate the effectiveness of the proposed method. Compared with existing methods, the mean squared error (MSE) of the maximum a posteriori (MAP) estimate obtained with the proposed placement is at least 25% lower when the number of sensors is between 3 and 6, while the estimation bias remains very close to zero (smaller than 0.15 m).
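
As a hedged illustration of the FIM machinery the abstract relies on, the sketch below builds the Bayesian FIM for the simpler 2D bearing-only case and evaluates the A-optimality cost trace(J^-1); the function name aoa_fim_2d, the noise level sigma, and the sensor geometry are all illustrative assumptions, and the paper's 3D construction with azimuth and elevation angles is richer than this.

    import numpy as np

    def aoa_fim_2d(sensors, target, sigma=0.05, prior_cov=None):
        # Bayesian Fisher information matrix for 2D bearing-only (AOA)
        # localization with Gaussian bearing noise (std sigma, radians).
        # With a Gaussian prior, the Bayesian FIM adds the inverse prior
        # covariance. Simplified 2D stand-in for the paper's 3D setting.
        J = np.zeros((2, 2))
        for s in sensors:
            d = target - s
            r2 = d @ d
            g = np.array([-d[1], d[0]]) / r2      # gradient of the atan2 bearing
            J += np.outer(g, g) / sigma**2
        if prior_cov is not None:
            J += np.linalg.inv(prior_cov)
        return J

    sensors = [np.array([0., 0.]), np.array([10., 0.]), np.array([0., 10.])]
    target = np.array([4., 6.])
    J = aoa_fim_2d(sensors, target, prior_cov=np.eye(2))
    print("A-optimality cost trace(J^-1):", np.trace(np.linalg.inv(J)))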

5.
This paper explores some applications of a two-moment inequality for the integral of the r-th power of a function, where 0 < r < 1. The first contribution is an upper bound on the Rényi entropy of a random vector in terms of two different moments. When one of the moments is the zeroth moment, these bounds recover previous results based on maximum entropy distributions under a single moment constraint. More generally, evaluating the bound with two carefully chosen nonzero moments can lead to significant improvements at a modest increase in complexity. The second contribution is a method for upper bounding mutual information in terms of certain integrals of the variance of the conditional density. The bounds have a number of useful properties arising from the connection with variance decompositions.
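
The quantity being bounded is the integral of f^r for 0 < r < 1, which determines the Rényi entropy h_r(f) = (1 - r)^{-1} ln(integral of f^r). A minimal numeric sketch for the standard Gaussian density, checked against the closed form h_r = (1/2) ln(2*pi) + ln(r)/(2(r - 1)); the grid and the choice r = 1/2 are arbitrary.

    import numpy as np

    r = 0.5
    x = np.linspace(-20, 20, 200001)
    f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # standard normal density
    h_numeric = np.log(np.trapz(f**r, x)) / (1 - r)  # h_r from the integral of f^r
    h_closed = 0.5 * np.log(2 * np.pi) + 0.5 * np.log(r) / (r - 1)
    print(h_numeric, h_closed)                       # the two values agree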

6.
We introduce a new distribution called the power-modified Kies-exponential (PMKE) distribution and derive some of its mathematical properties. Its hazard function can be bathtub-shaped, increasing, or decreasing. Its parameters are estimated by seven classical methods. Further, Bayesian estimation under squared-error, general entropy, and LINEX loss functions is adopted to estimate the parameters. Simulation results are provided to investigate the behavior of these estimators. The estimation methods are ranked, based on partial and overall ranks, to determine the best estimation approach for the model parameters. The proposed distribution is used to model a real-life turbocharger dataset, where it is compared with 24 extensions of the exponential distribution.
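
The PMKE density itself is not given in the abstract, so as a stand-in the sketch below contrasts two of the estimation approaches mentioned (maximum likelihood vs. Bayes under squared-error loss) on the plain exponential model that PMKE extends; the prior hyperparameters a and b are hypothetical.

    import numpy as np

    # For Exp(rate lam) data with a conjugate Gamma(a, b) prior on lam, the
    # squared-error Bayes estimator is the posterior mean (a + n)/(b + sum(x)).
    rng = np.random.default_rng(1)
    lam_true, n = 2.0, 30
    x = rng.exponential(1 / lam_true, size=n)
    mle = n / x.sum()                    # maximum likelihood estimate
    a, b = 1.0, 1.0                      # hypothetical prior hyperparameters
    bayes = (a + n) / (b + x.sum())      # Bayes under squared-error loss
    print(f"MLE: {mle:.3f}  Bayes (squared-error): {bayes:.3f}  true: {lam_true}")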

7.
We investigate quantum information via a theoretical measurement approach for an Aharonov–Bohm (AB) ring with a Yukawa interaction in a curved space with a disclination. We obtain the Shannon entropy through the eigenfunctions of the system. The quantum states considered come from Schrödinger theory with the AB field in the curved-space background. With this entropy, we can explore the quantum information in both position space and reciprocal space. Furthermore, we discuss how the magnetic field, the AB flux, and the topological defect influence the quantum states and the information entropy.
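
The workhorse quantity here is the position-space Shannon entropy of a quantum state, S = -∫ |psi(x)|^2 ln|psi(x)|^2 dx. A minimal numeric sketch using a 1D harmonic-oscillator ground state as a stand-in for the paper's AB-ring eigenfunctions (which would simply replace psi2 below):

    import numpy as np

    x = np.linspace(-10, 10, 100001)
    psi2 = np.exp(-x**2) / np.sqrt(np.pi)      # |psi|^2 for the ground state
    S = -np.trapz(psi2 * np.log(psi2), x)      # position-space Shannon entropy
    print(S, 0.5 * np.log(np.pi * np.e))       # matches closed form (1/2) ln(pi*e)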

8.
From their conception to the present, various concepts and definitions of entropy have played key roles in areas ranging from thermodynamics to information science, and they apply to both classical and quantum systems. Among them is the Rényi entropy, which characterizes various properties of classical information in a unified, concise form. We focus on its quantum counterpart, which unifies the von Neumann entropy, the max- and min-entropy, the collision entropy, and others. It can be directly applied only to Hermitian systems, because it requires the density matrix to be normalized. For a non-Hermitian system, the evolved density matrix may not be normalized; i.e., its trace can become larger or smaller than one as time evolves. The Rényi entropy is then not well-defined for the resulting non-normalized probability distribution, especially when the trace of the non-normalized density matrix exceeds one. In this work, we investigate how to describe the Rényi entropy for non-Hermitian systems more appropriately. We obtain a concise, generalized form of the α-Rényi entropy, extending the unified order α from finite positive real numbers to zero and infinity. Our generalized α-Rényi entropy can be computed directly from both normalized and non-normalized density matrices, so it can describe non-Hermitian entropy dynamics. We illustrate the necessity of our generalization by showing the differences between our entropy and the conventional Rényi entropy for non-Hermitian detuned two-level systems.
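
To make the normalization issue tangible, the sketch below contrasts the conventional Rényi entropy with one natural scale-invariant variant for non-normalized density matrices; this variant simply amounts to normalizing by the trace first and is only an illustrative assumption, not necessarily the paper's generalized form.

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def renyi_entropy(rho, alpha):
        # Conventional quantum Renyi entropy, valid for normalized rho:
        # S_alpha = (1/(1-alpha)) * ln Tr[rho^alpha].
        t = np.real(np.trace(fractional_matrix_power(rho, alpha)))
        return np.log(t) / (1 - alpha)

    def renyi_entropy_nonnormalized(rho, alpha):
        # Illustrative scale-invariant variant for non-normalized rho
        # (equivalent to normalizing by the trace first):
        # S_alpha = (1/(1-alpha)) * ln( Tr[rho^alpha] / (Tr rho)^alpha ).
        t = np.real(np.trace(rho))
        num = np.real(np.trace(fractional_matrix_power(rho, alpha)))
        return np.log(num / t**alpha) / (1 - alpha)

    rho = np.diag([0.9, 0.4])     # trace 1.3: non-Hermitian evolution can do this
    print(renyi_entropy_nonnormalized(rho, 2.0),
          renyi_entropy(rho / np.trace(rho), 2.0))   # identical by construction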

9.
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager's E0 functions (with and without cost constraints); (2) in large-deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α, derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by maximizing the α-mutual information subject to an exponential average constraint.
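
For reference, the two order-α informations named in approach (3) are standard and both derive from the Rényi divergence; stated here for discrete alphabets:

    D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^\alpha \, Q(x)^{1-\alpha},
    I_\alpha(X;Y) = \min_{Q_Y} D_\alpha(P_{XY} \,\|\, P_X \times Q_Y),
    I_\alpha^{\mathrm{AC}}(X;Y) = \min_{Q_Y} \sum_x P_X(x) \, D_\alpha(P_{Y|X=x} \,\|\, Q_Y) \quad (\text{Augustin–Csiszár}).

Both reduce to the Shannon mutual information I(X;Y) as \alpha \to 1.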

10.
Measures of information transfer corresponding to non-additive entropies have been studied intensively over the past decades. Most of this work concerns entropies belonging to the Sharma–Mittal class, such as the Rényi, Tsallis, Landsberg–Vedral, and Gaussian entropies. All of these treatments follow the same approach: they mimic one of the various, mutually equivalent definitions of the Shannon information measures, quantify information transfer by an appropriately defined measure of mutual information, and regard the maximal information transfer as a generalized channel capacity. However, all previous approaches fail to satisfy at least one of the ineluctable properties that a measure of (maximal) information transfer should satisfy, leading to counterintuitive conclusions and predicting nonphysical behavior even for very simple communication channels. This paper fills the gap by proposing two two-parameter measures, the α-q-mutual information and the α-q-capacity. Besides the standard Shannon approaches, special cases of these measures include the α-mutual information and the α-capacity, which are well established in the information theory literature as measures of additive Rényi information transfer, while the Tsallis, Landsberg–Vedral, and Gaussian entropy cases can also be recovered by special choices of the parameters α and q. It is shown that, unlike the previous definitions, the α-q-mutual information and the α-q-capacity satisfy a set of properties, stated as axioms, by which they reduce to zero for totally destructive channels and to the (maximal) input Sharma–Mittal entropy for perfect transmission, consistent with the maximum likelihood detection error. In addition, they are non-negative and bounded above by the input and output Sharma–Mittal entropies in general. Thus, unlike the previous approaches, the proposed (maximal) information transfer measures do not exhibit nonphysical behaviors such as sub-capacitance or super-capacitance, which qualifies them as appropriate measures of Sharma–Mittal information transfer.
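
For concreteness, the sketch below evaluates the Sharma–Mittal entropy in one common parameterization (the parameter names and test pmf are illustrative; the paper's α-q measures are built on top of this family) and checks the q → 1 Rényi limit numerically.

    import numpy as np

    def sharma_mittal(p, alpha, q):
        # Sharma-Mittal entropy, one common parameterization (natural log base):
        # H_{alpha,q}(p) = ((sum p_i^alpha)^((1-q)/(1-alpha)) - 1) / (1 - q).
        # Recovers Renyi as q -> 1, Tsallis as q -> alpha, Shannon as both -> 1.
        p = np.asarray(p, float)
        s = np.sum(p ** alpha)
        return (s ** ((1 - q) / (1 - alpha)) - 1) / (1 - q)

    p = np.array([0.5, 0.3, 0.2])
    print(sharma_mittal(p, alpha=0.8, q=0.9))
    # q -> 1 limit approaches the Renyi entropy ln(sum p^alpha)/(1-alpha):
    print(sharma_mittal(p, 0.8, 0.999999), np.log(np.sum(p**0.8)) / (1 - 0.8))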

11.
The spreading of the stationary states of multidimensional single-particle systems with a central potential is quantified by means of Heisenberg-like measures (radial and logarithmic expectation values) and entropy-like quantities (Fisher, Shannon, Rényi) of the position and momentum probability densities. Since the potential is assumed to be analytically unknown, these dispersion and information-theoretical measures are given by inequality-type relations, which are explicitly shown to depend on the dimensionality and on the state's angular hyperquantum numbers. The spherical-symmetry and spin effects on these spreading properties are obtained by means of various integral inequalities (Daubechies–Thakkar, Lieb–Thirring, Redheffer–Weyl, ...) and a variational approach based on the extremization of entropy-like measures. Emphasis is placed on the uncertainty relations, on which the probabilistic theory of quantum systems essentially rests.
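
One representative instance of the uncertainty relations emphasized here, quoted for orientation (a known result, not the paper's full set of bounds), is the Białynicki-Birula–Mycielski entropic uncertainty relation for the Shannon entropies of the position and momentum densities of a D-dimensional system (in units with \hbar = 1):

    S_\rho + S_\gamma \ge D \, (1 + \ln \pi),

which is saturated by Gaussian states and implies the Heisenberg uncertainty relation.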

12.
In this paper, we focus on extended informational measures based on a convex function ϕ: entropies, extended Fisher information, and generalized moments. Both the generalization of the Fisher information and that of the moments rely on the definition of an escort distribution linked to the (entropic) functional ϕ. We revisit the usual maximum entropy principle, more precisely its inverse problem (starting from the distribution and the constraints), which leads to the introduction of state-dependent ϕ-entropies. We then examine interrelations between the extended informational measures and generalize relationships such as the Cramér–Rao inequality and the de Bruijn identity in this broader context. In this framework, the maximum entropy distributions play a central role. All the results derived in the paper include the usual ones as special cases.
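
For orientation, the basic objects the abstract manipulates can be written out in standard special cases (these are classical facts, not the paper's new state-dependent construction):

    H_\phi(p) = -\int \phi(p(x)) \, dx, \qquad \phi \text{ convex},

where \phi(u) = u \ln u recovers the Shannon entropy and \phi(u) = (u^q - u)/(q - 1) the Tsallis entropy; the escort distribution associated with the latter is P_q(x) = p(x)^q / \int p(y)^q \, dy, of the kind that enters the generalized Fisher information and moments mentioned above.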

13.
We generalize the Jensen–Shannon divergence and the Jensen–Shannon diversity index by considering a variational definition with respect to a generic mean, thereby extending the notion of Sibson's information radius. The variational definition applies to any arbitrary distance and yields a new way to define a Jensen–Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed families of probability measures, we get relative Jensen–Shannon divergences and their equivalent Jensen–Shannon symmetrizations of distances, which generalize the concept of information projections. Finally, we touch upon applications of these variational Jensen–Shannon divergences and diversity indices to clustering and quantization tasks of probability measures, including statistical mixtures.
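
A minimal sketch of the symmetrization idea: replacing the arithmetic mean in the classical Jensen–Shannon divergence by another (renormalized) mean, here the geometric mean. The helper names and test pmfs are invented, and the paper's variational definition over generic means and arbitrary distances is strictly more general.

    import numpy as np

    def kl(p, q):
        # Kullback-Leibler divergence between two pmfs (natural log).
        return np.sum(p * np.log(p / q))

    def js_generic_mean(p, q, mean):
        # Jensen-Shannon-style symmetrization with a generic mean: plug-in
        # sketch, not the paper's full variational definition.
        m = mean(p, q)
        m = m / m.sum()                   # renormalize the mean to a pmf
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    arithmetic = lambda p, q: 0.5 * (p + q)
    geometric = lambda p, q: np.sqrt(p * q)

    p = np.array([0.6, 0.3, 0.1])
    q = np.array([0.2, 0.5, 0.3])
    print(js_generic_mean(p, q, arithmetic))   # classical Jensen-Shannon divergence
    print(js_generic_mean(p, q, geometric))    # geometric-mean variant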
