Similar Literature
Found 20 similar documents (search time: 687 ms)
1.
In this paper, we study the entropy functions on extreme rays of the polymatroidal region which contain a matroid, i.e., matroidal entropy functions. We introduce variable strength orthogonal arrays indexed by a connected matroid $M$ and a positive integer $v$, which can be regarded as an expansion of the classic combinatorial structure of orthogonal arrays. Interestingly, they are equivalent to the partition representations of the matroid $M$ with degree $v$ and to the $(M, v)$ almost affine codes. Thus, a synergy among four fields, i.e., information theory, matroid theory, combinatorial design, and coding theory, is developed, which may lead to potential applications in information problems such as network coding and secret sharing. Leveraging the construction of variable strength orthogonal arrays, we characterize all matroidal entropy functions of order $n \le 5$, with the exception of $\log 10 \cdot U_{2,5}$ and $\log v \cdot U_{3,5}$ for some $v$.
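As a concrete illustration of the notation $\log v \cdot U_{k,n}$ above, here is a minimal Python sketch that tabulates a matroidal entropy function as $\log v$ times the rank function of a uniform matroid. The function names are ours, and the sketch assumes only the standard rank $r(A) = \min(|A|, k)$ of $U_{k,n}$; it is an illustration of the notation, not the paper's construction.

```python
from itertools import combinations
from math import log

def uniform_rank(A, k):
    # Rank function of the uniform matroid U_{k,n}: r(A) = min(|A|, k).
    return min(len(A), k)

def matroidal_entropy_function(n, k, v):
    # h(A) = log(v) * r(A) for every subset A of the ground set {1,...,n};
    # this is the form "log v . U_{k,n}" referred to in the abstract.
    ground = range(1, n + 1)
    return {A: log(v) * uniform_rank(A, k)
            for m in range(n + 1)
            for A in combinations(ground, m)}

# Example: the entropy function log 2 . U_{2,3}
h = matroidal_entropy_function(n=3, k=2, v=2)
print(h[(1,)], h[(1, 2)], h[(1, 2, 3)])  # log 2, 2 log 2, 2 log 2
```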

2.
3.
4.
5.
Path integral Monte Carlo and closure computations are utilized to study real-space triplet correlations in the quantum hard-sphere system. The conditions range from the normal fluid phase to the solid phases face-centered cubic (FCC) and cI16 (de Broglie wavelengths $0.2 \le \lambda_B^* < 2$, densities $0.1 \le \rho_N^* \le 0.925$). The focus is on the equilateral and isosceles features of the path-integral centroid and instantaneous structures. Complementary calculations of the associated pair structures are also carried out to strengthen structural identifications and facilitate closure evaluations. The three closures employed are Kirkwood superposition, Jackson–Feenberg convolution, and their average (AV3). A large quantity of new data are reported, and conclusions are drawn regarding (i) the remarkable performance of AV3 for the centroid and instantaneous correlations, (ii) the correspondences between the fluid and FCC salient features on the coexistence line, and (iii) the most conspicuous differences between FCC and cI16 at the pair and triplet levels at moderately high densities ($\rho_N^* = 0.9, 0.925$). This research is expected to provide low-temperature insights useful for future related studies of the properties of real systems (e.g., helium, alkali metals, and general colloidal systems).
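For readers unfamiliar with the closures named above, the following sketch shows the Kirkwood superposition approximation and an AV3-style average. It assumes AV3 is the plain arithmetic mean of the Kirkwood and Jackson–Feenberg estimates, per the abstract's "their average"; the Jackson–Feenberg convolution itself is not reproduced here and is taken as a supplied function.

```python
def kirkwood_g3(g2, r12, r13, r23):
    # Kirkwood superposition: approximate the triplet correlation
    # g3(r12, r13, r23) by the product of the three pair correlations.
    return g2(r12) * g2(r13) * g2(r23)

def av3_g3(g2, jf_g3, r12, r13, r23):
    # AV3-style closure: arithmetic mean of the Kirkwood estimate and a
    # Jackson-Feenberg estimate supplied by the caller as jf_g3.
    return 0.5 * (kirkwood_g3(g2, r12, r13, r23) + jf_g3(r12, r13, r23))

# Toy usage with a schematic hard-sphere-like pair correlation.
g2 = lambda r: 0.0 if r < 1.0 else 1.0 + 0.5 / r
jf = lambda r12, r13, r23: g2(r12) * g2(r13) * g2(r23)  # placeholder stand-in
print(av3_g3(g2, jf, 1.1, 1.2, 1.3))
```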

6.
We present computer simulation and theoretical results for a system of $N$ quantum hard spheres (QHS) of diameter $\sigma$ and mass $m$ at temperature $T$, confined between parallel hard walls separated by a distance $H\sigma$, within the range $1 \le H < \infty$. Semiclassical Monte Carlo simulations were performed, adapted to a confined space, considering effects in terms of the particle density $\rho^* = N/V$, where $V$ is the accessible volume, the inverse length $H^{-1}$, and the thermal de Broglie wavelength $\lambda_B = h/\sqrt{2\pi m k T}$, where $k$ and $h$ are the Boltzmann and Planck constants, respectively. For the cases of extreme and maximum confinement, $0.5 < H^{-1} < 1$ and $H^{-1} = 1$, respectively, analytical results can be given, based on an extension to quantum systems of the Helmholtz free energies of the corresponding classical systems.
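A quick numerical check of the thermal de Broglie wavelength $\lambda_B = h/\sqrt{2\pi m k T}$ used above; the helium-4 values below are illustrative choices of ours, not taken from the paper.

```python
from math import pi, sqrt

def thermal_de_broglie(m, T):
    # lambda_B = h / sqrt(2 * pi * m * k * T), all in SI units.
    h = 6.62607015e-34   # Planck constant, J s
    k = 1.380649e-23     # Boltzmann constant, J / K
    return h / sqrt(2 * pi * m * k * T)

# Illustrative: a helium-4 atom (~6.65e-27 kg) at 4 K
print(thermal_de_broglie(6.6465e-27, 4.0))  # ~ 4.4e-10 m, i.e., a few angstroms
```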

7.
This paper investigates the achievable per-user degrees of freedom (DoF) in the uplink of multi-cloud based sectored hexagonal cellular networks (M-CRAN). The network consists of $N$ base stations (BS) and $K \le N$ baseband unit pools (BBUP), which function as independent cloud centers. Communication between BSs and BBUPs occurs over finite-capacity fronthaul links of capacity $C_F = \mu_F \cdot \frac{1}{2}\log(1+P)$, with $P$ denoting transmit power. In the system model, BBUPs have limited processing capacity $C_{\mathrm{BBU}} = \mu_{\mathrm{BBU}} \cdot \frac{1}{2}\log(1+P)$. We propose two achievability schemes based on dividing the network into non-interfering parallelogram and hexagonal clusters, respectively. The minimum number of users in a cluster is determined by the ratio of BBUPs to BSs, $r = K/N$. Both the parallelogram and hexagonal schemes are based on practically implementable beamforming and adapt the way clusters are formed to the sectorization of the cells. The proposed coding schemes improve the sum rate over naive approaches that ignore cell sectorization, both at finite signal-to-noise ratio (SNR) and in the high-SNR limit. We derive a lower bound on the per-user DoF as a function of $\mu_{\mathrm{BBU}}$, $\mu_F$, and $r$. We show that the cut-set bound is attained in several cases, that the gap between the lower and cut-set bounds decreases with the inverse of the BBUP-BS ratio $1/r$ for $\mu_F \le 2M$ irrespective of $\mu_{\mathrm{BBU}}$, and that the per-user DoF achieved through hexagonal clustering cannot exceed that of parallelogram clustering for any value of $\mu_{\mathrm{BBU}}$ and $r$ as long as $\mu_F \le 2M$. Since the achievability gap decreases with the inverse of the BBUP-BS ratio for small and moderate fronthaul capacities, the cut-set bound is almost achieved even for small cluster sizes in this range of fronthaul capacities. For higher fronthaul capacities, the achievability gap is not always tight but decreases with processing capacity; the cut-set bound, e.g., at $5M/6$, can still be achieved with a moderate cluster size.
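To make the DoF bookkeeping above concrete: the prelog $\mu_F$ of the fronthaul capacity is what survives normalization by $\frac{1}{2}\log(1+P)$ at high SNR. A minimal sketch in our own notation (not the paper's code):

```python
import math

def fronthaul_capacity(mu_F, P):
    # C_F = mu_F * (1/2) * log2(1 + P)
    return mu_F * 0.5 * math.log2(1 + P)

def dof_prelog(rate_fn, P=1e9):
    # Degrees of freedom = high-SNR slope: rate / ((1/2) log2(1 + P)).
    return rate_fn(P) / (0.5 * math.log2(1 + P))

print(dof_prelog(lambda P: fronthaul_capacity(2.0, P)))  # -> 2.0
```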

8.
The records of seismic noise in Japan for the period 1997–2020, which includes the Tohoku seismic catastrophe of 11 March 2011, are considered. The following properties of the noise are analyzed: the wavelet-based Donoho–Johnstone index, the singularity spectrum support width, and the entropy of the wavelet coefficients. The question of whether precursors of strong earthquakes can be formulated on their basis is investigated. Attention is paid, for the time interval after the Tohoku mega-earthquake, to the trends in the mean properties of low-frequency seismic noise, which reflect a continuing simplification of the statistical structure of seismic vibrations. Estimates of two-dimensional probability densities of extreme values are presented, which highlight the places where extreme values of the seismic noise properties are most often realized. The estimated probability densities of extreme values coincide with each other and have a maximum in the region $30°\mathrm{N} \le \mathrm{Lat} \le 34°\mathrm{N}$, $136°\mathrm{E} \le \mathrm{Lon} \le 140°\mathrm{E}$. The main conclusion of these studies is that the preparation of a strong earthquake is accompanied by a simplification of the structure of seismic noise. It is shown that bursts of coherence between the time series of the day length and the noise properties within an annual time window precede bursts of released seismic energy. The lag of the release of seismic energy relative to bursts of coherence is about 1.5 years, which can be used to declare a time interval of high seismic hazard after a peak of coherence is reached.

9.
Aims: Bubble entropy (bEn) is an entropy metric with a limited dependence on parameters. bEn does not directly quantify the conditional entropy of the series; rather, it assesses the change in the entropy of the ordering of portions of its samples of length $m$ when an extra element is added. The analytical formulation of bEn for autoregressive (AR) processes shows that, for this class of processes, the relation between the first autocorrelation coefficient and bEn differs for odd and even values of $m$. While this is not an issue per se, it triggered ideas for further investigation. Methods: Using theoretical considerations on the expected values for AR processes, we examined a two-steps-ahead estimator of bEn, which considers the cost of ordering two additional samples. We first compared it with the original bEn estimator on simulated series. Then, we tested it on real heart rate variability (HRV) data. Results: The experiments showed that both examined alternatives have comparable discriminating power. However, for values of $10 < m < 20$, where the statistical significance of the method increased with $m$, the two-steps-ahead estimator presented slightly higher statistical significance and more regular behavior, even if the dependence on the parameter $m$ was still minimal. We also investigated a new normalization factor for bEn, which ensures that bEn $= 1$ when white Gaussian noise (WGN) is given as the input. Conclusions: The research improved our understanding of bubble entropy, in particular in the context of HRV analysis, and clarified interesting details regarding the definition of the estimator.
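A minimal Python sketch of the bubble entropy estimator discussed above, assuming the common formulation of Manis et al. (bubble-sort swap counts over $m$-sample windows, Rényi entropy of order 2 of the swap-count distribution); the two-steps-ahead variant examined in the abstract would replace $m+1$ with $m+2$ and adjust the normalization accordingly. This is a sketch under those assumptions, not the paper's code.

```python
import numpy as np

def swap_counts(x, m):
    # Bubble-sort swap count for every window of m consecutive samples.
    counts = []
    for i in range(len(x) - m + 1):
        w = list(x[i:i + m])
        swaps = 0
        for a in range(m):
            for b in range(m - 1 - a):
                if w[b] > w[b + 1]:
                    w[b], w[b + 1] = w[b + 1], w[b]
                    swaps += 1
        counts.append(swaps)
    return np.array(counts)

def renyi2(counts, n_states):
    # Renyi entropy of order 2 of the empirical swap-count distribution.
    p = np.bincount(counts, minlength=n_states) / len(counts)
    return -np.log(np.sum(p ** 2))

def bubble_entropy(x, m):
    # Entropy increase when the window grows from m to m+1 samples,
    # normalized by log((m+1)/(m-1)).
    h_m  = renyi2(swap_counts(x, m),     m * (m - 1) // 2 + 1)
    h_m1 = renyi2(swap_counts(x, m + 1), (m + 1) * m // 2 + 1)
    return (h_m1 - h_m) / np.log((m + 1) / (m - 1))

rng = np.random.default_rng(0)
print(bubble_entropy(rng.standard_normal(2000), m=10))
```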

10.
The precise mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width (SAS), collected during slow ventilation and ventilation against inspiratory resistance. The experiment was performed on a group of 20 healthy volunteers (12 females and 8 males; BMI $= 22.1 \pm 3.2$ kg/m$^2$; age $25.3 \pm 7.9$ years). We analysed the recorded signals with a wavelet transform. For the first time, a method based on dynamical Bayesian inference was used to detect the effective phase connectivity and the underlying coupling functions between the SAS and BP signals. There are several new findings. Slow breathing, with or without resistance, increases the strength of the coupling between the respiratory and cardiac components of both measured signals. We also observed increases in the strength of the coupling between the respiratory component of the BP and the cardiac component of the SAS, and vice versa. Slow breathing synchronises the SAS oscillations between the brain hemispheres. It also diminishes the similarity of the coupling between all analysed pairs of oscillators, while inspiratory resistance partially reverses this phenomenon. BP–SAS and SAS–BP interactions may reflect changes in the overall biomechanical characteristics of the brain.

11.
Recently, it has been shown that the information flow and causality between two time series can be inferred in a rigorous and quantitative sense and, moreover, that the resulting causality can be normalized. A corollary that follows is that, in the linear limit, causation implies correlation, while correlation does not imply causation. Now suppose there is an event A taking a harmonic form (sine/cosine), and it generates through some process another event B, so that B always lags A by a phase of $\pi/2$. Here the causality is obvious, while by computation the correlation is nonetheless zero. This apparent contradiction is rooted in the fact that a harmonic system always leaves a single point on the Poincaré section; it does not add information. That is to say, though the absolute information flow from A to B is zero, i.e., $T_{A \to B} = 0$, the total information increase of B is also zero, so the normalized $T_{A \to B}$, denoted $\tau_{A \to B}$, takes the indeterminate form $0/0$. By slightly perturbing the system with some noise, solving a stochastic differential equation, and letting the perturbation go to zero, it can be shown that $\tau_{A \to B}$ approaches 100%, just as one would expect.
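The sine/cosine example above is easy to verify numerically: over a full period, a signal and its $\pi/2$-lagged copy are uncorrelated even though one fully determines the other. A short check:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
A = np.sin(t)
B = np.sin(t - np.pi / 2)   # B lags A by pi/2 (i.e., B = -cos t)

# Pearson correlation over a full period is zero, despite B being
# completely caused by A.
print(np.corrcoef(A, B)[0, 1])  # ~ 0, up to floating-point noise
```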

12.
This paper explores some applications of a two-moment inequality for the integral of the $r$th power of a function, where $0 < r < 1$. The first contribution is an upper bound on the Rényi entropy of a random vector in terms of two different moments. When one of the moments is the zeroth moment, these bounds recover previous results based on maximum-entropy distributions under a single moment constraint. More generally, evaluation of the bound with two carefully chosen nonzero moments can lead to significant improvements with a modest increase in complexity. The second contribution is a method for upper bounding mutual information in terms of certain integrals with respect to the variance of the conditional density. The bounds have a number of useful properties arising from the connection with variance decompositions.
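For orientation, the quantity being bounded is the Rényi differential entropy $h_r(X) = \frac{1}{1-r}\ln\int f^r$. Below is a quick numerical sanity check against the closed form for a standard Gaussian; this is our own illustration of the definition, not the paper's bound.

```python
import numpy as np

def renyi_entropy_numeric(f, x, r):
    # h_r = 1/(1-r) * ln( integral of f(x)^r dx ), for r != 1,
    # approximated by a Riemann sum on a uniform grid.
    dx = x[1] - x[0]
    return np.log(np.sum(f(x) ** r) * dx) / (1.0 - r)

x = np.linspace(-20, 20, 200_001)
gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

r = 0.5
closed_form = np.log(np.sqrt(2 * np.pi)) + np.log(r) / (2 * (r - 1))
print(renyi_entropy_numeric(gauss, x, r), closed_form)  # both ~ 1.612
```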

13.
The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay $\tau$ and embedding dimension $p$ for phase-space reconstruction. The value of $\tau$ can be estimated from mutual information, but this method is computationally cumbersome. Additionally, some researchers have recommended that $\tau$ be chosen depending on the embedding dimension $p$ by means of an appropriate value of the delay window $\tau_w = (p-1)\tau$, which is the optimal time delay for independence of the time series. The C-C method, based on the correlation integral, is simpler than mutual information and has been proposed to select $\tau_w$ and $\tau$ optimally. In this paper, we suggest a simple method for estimating $\tau$ and $\tau_w$ based on symbolic analysis and symbolic entropy. As in the C-C method, $\tau$ is estimated as the first local optimal time delay and $\tau_w$ as the time delay for independence of the time series. The method is applied to several chaotic time series that serve as a basis of comparison for several techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, according to the studied models, performs better than the C-C method for the choice of the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity during epileptic episodes.
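As context for the comparison above, the mutual-information baseline picks $\tau$ at the first local minimum of $I(x_t; x_{t+\tau})$; the symbolic method replaces this with a cheaper symbolic-entropy criterion. A histogram-based sketch of the baseline follows (the binning choices are ours, not the paper's):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Histogram estimate of I(X; Y) in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def first_minimum_delay(x, max_tau=50):
    # tau = first local minimum of I(x_t; x_{t+tau}).
    mi = [mutual_information(x[:-tau], x[tau:]) for tau in range(1, max_tau)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
            return i + 1   # delays are 1-indexed
    return max_tau - 1     # fallback if no local minimum is found
```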

14.
This article deals with the compression of binary sequences with a given number of ones, which can also be considered as a list of indexes of a given length. The first part of the article shows that the entropy $H$ of a random $n$-element binary sequence with exactly $k$ elements equal to one satisfies the inequalities $k\log_2(0.48 \cdot n/k) < H < k\log_2(2.72 \cdot n/k)$. Based on this result, we propose a simple coding that uses fixed-length words. Its main application is the compression of random binary sequences with a large disproportion between the number of zeros and the number of ones. Importantly, the proposed solution allows for much faster decompression compared with Golomb–Rice coding, at a relatively small cost in compression efficiency. The proposed algorithm can be particularly useful in database applications, for which the speed of decompression is much more important than the degree of index-list compression.
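The inequality can be checked directly: a uniformly random $n$-bit sequence with exactly $k$ ones has entropy $H = \log_2\binom{n}{k}$. A short verification with an arbitrary $(n, k)$ of our choosing:

```python
from math import comb, log2

n, k = 10_000, 100
H = log2(comb(n, k))                 # exact: uniform over C(n, k) sequences
lower = k * log2(0.48 * n / k)
upper = k * log2(2.72 * n / k)
print(f"{lower:.1f} < {H:.1f} < {upper:.1f}:", lower < H < upper)  # True
```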

15.
Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio $r = N/T$, where $N$ is the number of different assets in the portfolio and $T$ is the length of the available time series. The critical ratio depends on the confidence level $\alpha$, which means we have a line of critical points in the $\alpha$-$r$ plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an $\ell_1$ regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric $\ell_1$ regularizer. Results are presented for the out-of-sample and in-sample estimators of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high-volatility cutoff, in the sense that it sets the weights of the high-volatility elements to zero with higher probability than those of the low-volatility items. This cutoff renormalizes the aspect ratio $r = N/T$, thereby extending the range of feasibility of the optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
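For reference, the historical ES estimator underlying the stability discussion is simply the mean of the losses beyond the $\alpha$-quantile. The replica calculation in the paper concerns optimizing this quantity over portfolio weights, which the sketch below does not attempt; the toy loss distribution is our own choice.

```python
import numpy as np

def expected_shortfall(losses, alpha=0.975):
    # Average loss above the alpha-quantile (historical estimator).
    var = np.quantile(losses, alpha)      # Value-at-Risk at level alpha
    return losses[losses >= var].mean()

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=10_000)   # heavy-tailed toy P&L
print(expected_shortfall(losses))
```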

16.
The $q$-exponential form $e_q^x \equiv [1+(1-q)x]^{1/(1-q)}$ ($e_1^x = e^x$) is obtained by optimizing the nonadditive entropy $S_q \equiv k\,\frac{1-\sum_i p_i^q}{q-1}$ (with $S_1 = S_{BG} \equiv -k\sum_i p_i \ln p_i$, where BG stands for Boltzmann–Gibbs) under simple constraints, and emerges in wide classes of natural, artificial, and social complex systems. However, in experiments, observations, and numerical calculations, it rarely appears in its pure mathematical form. It appears instead exhibiting crossovers to, or mixed with, other similar forms. We first discuss departures from $q$-exponentials within crossover statistics, or by linearly combining them, or by linearly combining the corresponding $q$-entropies. Then, we discuss departures originated by double-index nonadditive entropies containing $S_q$ as a particular case.
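A direct transcription of the $q$-exponential into code, with the usual cutoff convention $e_q^x = 0$ where $1+(1-q)x \le 0$ (a convention we assume here), and the $q \to 1$ limit recovering $e^x$:

```python
import numpy as np

def q_exp(x, q):
    # e_q^x = [1 + (1 - q) x]^(1 / (1 - q));  e_1^x = e^x.
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    # abs() keeps the unused branch of np.where free of NaNs.
    return np.where(base > 0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

x = np.linspace(-1, 3, 5)
print(q_exp(x, 1.0))     # ordinary exponential
print(q_exp(x, 1.5))     # fat-tailed q-exponential with cutoff
```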

17.
We investigate the possibility of phantom crossing in the dark energy sector and a solution to the Hubble tension between early- and late-universe observations. We use robust combinations of different cosmological observations, namely the Cosmic Microwave Background (CMB), the local measurement of the Hubble constant ($H_0$), Baryon Acoustic Oscillations (BAO), and Type Ia supernovae (SNIa), for this purpose. For the combination of CMB+BAO data, which is related to early-universe physics, phantom crossing in the dark energy sector is confirmed at the 95% confidence level, and we obtain the constraint $H_0 = 71.0^{+2.9}_{-3.8}$ km/s/Mpc at the 68% confidence level, in perfect agreement with the local measurement by Riess et al. We show that constraints from different combinations of data are consistent with each other, and all of them are consistent with phantom crossing in the dark energy sector. For the combination of all data considered, we obtain the constraint $H_0 = 70.25 \pm 0.78$ km/s/Mpc at the 68% confidence level, with the phantom crossing happening at the scale factor $a_m = 0.851^{+0.048}_{-0.031}$ at the 68% confidence level.

18.
A solvable model of a periodically driven trapped mixture of Bose–Einstein condensates, consisting of $N_1$ interacting bosons of mass $m_1$ driven by a force of amplitude $f_{L,1}$ and $N_2$ interacting bosons of mass $m_2$ driven by a force of amplitude $f_{L,2}$, is presented. The model generalizes the harmonic-interaction model for mixtures to the time-dependent domain. The resulting many-particle ground Floquet wavefunction and quasienergy, as well as the time-dependent densities and reduced density matrices, are prescribed explicitly and analyzed at the many-body and mean-field levels of theory, for finite systems and in the limit of an infinite number of particles. We prove that, in the limit of an infinite number of particles, the time-dependent densities per particle are given by their respective mean-field quantities, and the time-dependent reduced one-particle and two-particle density matrices per particle of the driven mixture are 100% condensed. Interestingly, the quasienergy per particle does not coincide with the mean-field value at this limit, unless the relative center-of-mass coordinate of the two Bose–Einstein condensates is not activated by the driving forces $f_{L,1}$ and $f_{L,2}$. As an application, we investigate the imprinting of angular momentum and its fluctuations when steering a Bose–Einstein condensate by an interacting bosonic impurity, and the resulting modes of rotation. Whereas the expectation values per particle of the angular-momentum operator for the many-body and mean-field solutions coincide in the limit of an infinite number of particles, the respective fluctuations can differ substantially. The results are analyzed in terms of the transformation properties of the angular-momentum operator under translations and boosts, and as a function of the interactions between the particles. Implications are briefly discussed.

19.
In the nervous system, information is conveyed by sequences of action potentials called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between spikes' Information Transmission Rates (ITR) and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a spike-train fluctuation measure, I take the standard deviation $\sigma$, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations strongly depends on the parameter $s$, defined as a sum of transition probabilities from the no-spike state to the spike state. An estimate of the Information Transmission Rate was found through expressions depending on the values of the signal fluctuations and the parameter $s$. It turned out that for small $s$ ($s < 1$), the quotient $\mathrm{ITR}/\sigma$ has a maximum and can tend to zero depending on the transition probabilities, while for $1 < s$, $\mathrm{ITR}/\sigma$ is separated from 0. Additionally, it was shown that the quotient of the ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment ($1 < s$), to achieve appropriate reliability and efficiency of transmission, an IS with a higher tendency to transition from the no-spike to the spike state should be used. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
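The IS model above is a two-state (no-spike/spike) Markov chain, whose entropy rate has a standard closed form; the sketch below computes it together with the stationary spike frequency. The abstract's parameter $s$ is not reconstructed here, since its exact definition comes from the paper.

```python
import numpy as np

def binary_entropy(p):
    # H(p) in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def two_state_markov(p01, p10):
    # p01 = P(spike | no spike), p10 = P(no spike | spike).
    pi1 = p01 / (p01 + p10)              # stationary spike frequency
    # Entropy rate = stationary-weighted entropy of each row of the
    # transition matrix (bits per time step).
    rate = (1 - pi1) * binary_entropy(p01) + pi1 * binary_entropy(p10)
    return pi1, rate

print(two_state_markov(0.1, 0.4))        # sparse-spiking example
```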

20.
The effects of using a partly curved porous layer on the thermal management and entropy generation features are studied in a ventilated cavity filled with a hybrid nanofluid under an inclined magnetic field, using the finite volume method. The study is performed over the following ranges of the pertinent parameters: Reynolds number ($100 \le Re \le 1000$), magnetic field strength ($0 \le Ha \le 80$), permeability of the porous region ($10^{-4} \le Da \le 5\times10^{-2}$), porous layer height ($0.15H \le t_p \le 0.45H$), porous layer position ($0.25H \le y_p \le 0.45H$), and curvature size ($0 \le b \le 0.3H$). The magnetic field reduces the vortex size, while the average Nusselt number of the hot walls increases for Hartmann numbers above 20; the highest enhancement is 47% for the left vertical wall. The variation in the average Nu with the permeability of the layer is about 12.5% and 21% for the left and right vertical walls, respectively, while these amounts are 12.5% and 32.5% when the location of the porous layer changes. The entropy generation increases with Hartmann number above 20, with a 22% increase in entropy generation at the highest magnetic field. The porous layer height reduces the entropy generation in the domain above it, which gives the highest contribution to the overall entropy generation. When the location of the curved porous layer is varied, the highest variation of entropy generation is attained in the domain below it, while the lowest value is obtained at $y_p = 0.3H$. When the size of the elliptic curvature is varied, the overall entropy generation decreases by about 10% from $b = 0$ to $b = 0.2H$ and then increases by 5% from $b = 0.2H$ to $b = 0.3H$.
