Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we study the entropy functions on extreme rays of the polymatroidal region which contain a matroid, i.e., matroidal entropy functions. We introduce variable strength orthogonal arrays indexed by a connected matroid M and a positive integer v, which can be regarded as generalizing the classic combinatorial structure of orthogonal arrays. Interestingly, they are equivalent to the partition-representations of the matroid M with degree v and to the (M,v) almost affine codes. Thus, a synergy among four fields, i.e., information theory, matroid theory, combinatorial design, and coding theory, is developed, which may lead to potential applications in information problems such as network coding and secret sharing. Leveraging the construction of variable strength orthogonal arrays, we characterize all matroidal entropy functions of order n ≤ 5, with the exception of log 10 · U2,5 and log v · U3,5 for some v.
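The variable strength arrays above generalize classic orthogonal arrays, whose defining property is easy to verify computationally. A minimal Python sketch (illustrative only; it checks the classic strength-2 array over Z_3, not the paper's matroid-indexed construction):

```python
from itertools import combinations

def is_orthogonal_array(rows, v, t):
    """Check strength t: in every choice of t columns, each t-tuple over
    {0..v-1} appears equally often."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        counts = {}
        for r in rows:
            key = tuple(r[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != v ** t or len(set(counts.values())) != 1:
            return False
    return True

v = 3
rows = [(a, b, (a + b) % v) for a in range(v) for b in range(v)]
ok = is_orthogonal_array(rows, v, t=2)  # OA(9, 3, 3, 2): classic strength-2 array
```

The array {(a, b, a+b mod v)} is the standard example associated with the uniform matroid U2,3, the simplest case of the partition-representations mentioned in the abstract.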

2.
When studying the behaviour of complex dynamical systems, a statistical formulation can provide useful insights. In particular, information geometry is a promising tool for this purpose. In this paper, we investigate the information length for n-dimensional linear autonomous stochastic processes, providing a basic theoretical framework that can be applied to a large set of problems in engineering and physics. A specific application is made to a harmonically bound particle system with natural oscillation frequency ω, subject to damping γ and Gaussian white noise. We explore how the information length depends on ω and γ, elucidating the role of critical damping γ=2ω in information geometry. Furthermore, in the long time limit, we show that the information length reflects the linear geometry associated with the Gaussian statistics in a linear stochastic process.
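For a one-dimensional Gaussian process, the information length reduces to integrating the Fisher-metric line element ds² = dμ²/σ² + 2dσ²/σ². A minimal numerical sketch for a 1-D Ornstein–Uhlenbeck process (an assumed simplification of the paper's n-dimensional setting; all parameter names are illustrative):

```python
import numpy as np

def information_length(mu0, var0, gamma, D, t_max=20.0, n=200000):
    """Integrate dL/dt = sqrt( (dmu/dt)^2/var + (dvar/dt)^2/(2 var^2) )
    for a 1-D Ornstein-Uhlenbeck process with drift -gamma*x and diffusion D,
    whose Gaussian solution is mu(t) = mu0*exp(-gamma*t),
    var(t) = D/gamma + (var0 - D/gamma)*exp(-2*gamma*t)."""
    t = np.linspace(0.0, t_max, n)
    mu = mu0 * np.exp(-gamma * t)
    var = D / gamma + (var0 - D / gamma) * np.exp(-2.0 * gamma * t)
    dmu = -gamma * mu
    dvar = -2.0 * gamma * (var - D / gamma)
    integrand = np.sqrt(dmu**2 / var + dvar**2 / (2.0 * var**2))
    # trapezoidal rule (written out to stay compatible across NumPy versions)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
```

Starting at the stationary variance (var0 = D/gamma), only the mean moves, and the information length collapses to mu0*sqrt(gamma/D), a convenient sanity check.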

3.
Simple Summary: In the early Universe, both the QCD and EW eras play an essential role in laying seeds for nucleosynthesis and even dictating the cosmological large-scale structure. Taking advantage of recent developments in ultrarelativistic nuclear experiments and nonperturbative and perturbative lattice simulations, various thermodynamic quantities, including pressure, energy density, bulk viscosity, relaxation time, and temperature, have been calculated up to the TeV scale, in which the possible influence of finite bulk viscosity is characterized for the first time and the analytical dependence of the Hubble parameter on the scale factor is also introduced.

Abstract: Based on recent perturbative and non-perturbative lattice calculations with almost physical quark flavors and the thermal contributions from photons, neutrinos, leptons, electroweak particles, and scalar Higgs bosons, various thermodynamic quantities, at vanishing net-baryon densities, such as pressure, energy density, bulk viscosity, relaxation time, and temperature, have been calculated up to the TeV scale, i.e., covering the hadron, QGP, and electroweak (EW) phases in the early Universe. This remarkable progress motivated the present study to determine the possible influence of the bulk viscosity in the early Universe and to understand how this would vary from epoch to epoch. We have taken into consideration the first-order (Eckart) and second-order (Israel–Stewart) theories for the relativistic cosmic fluid and integrated viscous equations of state into the Friedmann equations. Nonlinear nonhomogeneous differential equations are obtained as analytical solutions. For Israel–Stewart, the differential equations are too sophisticated to be solved analytically; they are outlined here as road-maps for future studies. For Eckart theory, the only possible solution is the functionality H(a(t)), where H(t) is the Hubble parameter and a(t) is the scale factor, but neither of them could so far be directly expressed in terms of either proper or cosmic time t.
For an Eckart-type viscous background, especially at finite cosmological constant, non-singular H(t) and a(t) are obtained, where H(t) diverges for the QCD/EW and asymptotic EoS. For a non-viscous background, the dependence of H(a(t)) is monotonic. The same conclusion can be drawn for an ideal EoS. We also conclude that the rate of decrease of H(a(t)) with increasing a(t) varies from epoch to epoch, at vanishing and finite cosmological constant. These results help in improving our understanding of nucleosynthesis and the cosmological large-scale structure.

4.
Using finite time thermodynamic theory, an irreversible steady-flow Lenoir cycle model is established, and expressions for the power output and thermal efficiency of the model are derived. Through numerical calculations, for different fixed total heat conductances (UT) of the two heat exchangers, the maximum powers (Pmax), the maximum thermal efficiencies (ηmax), and the corresponding optimal heat conductance distribution ratios (uLP(opt)) and (uLη(opt)) are obtained. The effects of internal irreversibility are analyzed. The results show that, when the heat conductances of the hot- and cold-side heat exchangers are constants, the corresponding power output and thermal efficiency are constant values. When the heat source temperature ratio (τ) and the effectivenesses of the heat exchangers increase, the corresponding power output and thermal efficiency increase. When the heat conductance distributions are the optimal values, the characteristic relationships of P-uL and η-uL are parabolic-like ones. When UT is given, with the increase in τ, Pmax, ηmax, uLP(opt), and uLη(opt) increase. When τ is given, with the increase in UT, Pmax and ηmax increase, while uLP(opt) and uLη(opt) decrease.
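The parabolic-like dependence of power on the conductance split can be illustrated with the classic endoreversible Carnot engine at maximum power (Curzon–Ahlborn), which admits a closed form. This is a stand-in for the irreversible Lenoir model, not the paper's cycle; temperatures and UT below are illustrative:

```python
import numpy as np

def max_power(u, U_T, T_H, T_C):
    # Curzon-Ahlborn maximum power of an endoreversible Carnot engine with
    # Newtonian heat transfer; hot/cold conductances k_h = u*U_T, k_c = (1-u)*U_T:
    #   P* = (k_h*k_c/(k_h+k_c)) * (sqrt(T_H) - sqrt(T_C))**2
    k_h, k_c = u * U_T, (1.0 - u) * U_T
    return k_h * k_c / (k_h + k_c) * (np.sqrt(T_H) - np.sqrt(T_C)) ** 2

u = np.linspace(0.01, 0.99, 99)            # conductance distribution ratio
P = max_power(u, U_T=4.0, T_H=1000.0, T_C=300.0)
u_opt = u[np.argmax(P)]                    # parabola-like in u
```

With the symmetric Newton law, the optimum split is the even one (u_opt = 0.5); asymmetric heat-transfer laws, as in the paper, shift this optimum.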

5.
The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay τ and embedding dimension p for phase space reconstruction. The value of τ can be estimated from the Mutual Information, but this method is rather cumbersome computationally. Additionally, some researchers have recommended that τ should be chosen to be dependent on the embedding dimension p by means of an appropriate value for the time delay τw = (p−1)τ, which is the optimal time delay for independence of the time series. The C-C method, based on the Correlation Integral, is simpler than Mutual Information and has been proposed to select τw and τ optimally. In this paper, we suggest a simple method for estimating τ and τw based on symbolic analysis and symbolic entropy. As in the C-C method, τ is estimated as the first local optimal time delay and τw as the time delay for independence of the time series. The method is applied to several chaotic time series that are the base of comparison for several techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, according to the studied models, has a better performance than the C-C method for the choice of the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity under epileptic episodes.

6.
A new type of quantum correction to the structure of classical black holes is investigated. This concerns the physics of event horizons induced by the occurrence of stochastic quantum gravitational fields. The theoretical framework is provided by the theory of manifestly covariant quantum gravity and the related prediction of an exclusively quantum-produced stochastic cosmological constant. The specific example case of the Schwarzschild–deSitter geometry is looked at, analyzing the consequent stochastic modifications of the Einstein field equations. It is proved that, in such a setting, the black hole event horizon no longer identifies a classical (i.e., deterministic) two-dimensional surface. On the contrary, it acquires a quantum stochastic character, giving rise to a frame-dependent transition region of radial width δr between internal and external subdomains. It is found that: (a) the radial size of the stochastic region depends parametrically on the central mass M of the black hole, scaling as δr ∝ M3; (b) for supermassive black holes δr is typically orders of magnitude larger than the Planck length lP, whereas for typical stellar-mass black holes, δr may drop well below lP. The outcome provides new insight into the quantum properties of black holes, with implications for the physics of quantum tunneling phenomena expected to arise across stochastic event horizons.

7.
To sample from complex, high-dimensional distributions, one may choose algorithms based on the Hybrid Monte Carlo (HMC) method. HMC-based algorithms generate nonlocal moves, alleviating diffusive behavior. Here, I build on an already defined HMC framework, hybrid Monte Carlo on Hilbert spaces (Beskos, et al. Stoch. Proc. Applic. 2011), that provides finite-dimensional approximations of measures π which have density with respect to a Gaussian measure on an infinite-dimensional Hilbert (path) space. In all HMC algorithms, one has some freedom to choose the mass operator. The novel feature of the algorithm described in this article lies in the choice of this operator. This new choice defines a Markov Chain Monte Carlo (MCMC) method that is well defined on the Hilbert space itself. As before, the algorithm described herein uses an enlarged phase space Π having the target π as a marginal, together with a Hamiltonian flow that preserves Π. In the previous work, the authors explored a method where the phase space π was augmented with Brownian bridges. With this new choice, π is augmented by Ornstein–Uhlenbeck (OU) bridges. The covariance of Brownian bridges grows with their length, which has negative effects on the acceptance rate in the MCMC method. This contrasts with the covariance of OU bridges, which is independent of the path length. The ingredients of the new algorithm include the definition of the mass operator, the equations for the Hamiltonian flow, the (approximate) numerical integration of the evolution equations, and finally, the Metropolis–Hastings acceptance rule. Taken together, these constitute a robust method for sampling the target distribution in an almost dimension-free manner. The behavior of this novel algorithm is demonstrated by computer experiments for a particle moving in two dimensions, between two free-energy basins separated by an entropic barrier.
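A finite-dimensional, identity-mass HMC sampler illustrates the ingredients listed above (momentum refresh, leapfrog flow, Metropolis–Hastings acceptance). This is a plain textbook sketch; the Hilbert-space mass operator that is the paper's contribution is not reproduced here, and the Gaussian target is illustrative:

```python
import numpy as np

def hmc(logp_grad, x0, n_samples=2000, eps=0.15, n_leap=20, seed=0):
    """Minimal HMC with identity mass matrix. logp_grad(x) returns
    (log-density, gradient) of the target, up to an additive constant."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples, accepted = [], 0
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)            # momentum refresh
        x_new, p_new = x.copy(), p.copy()
        lp, g = logp_grad(x_new)
        H_old = -lp + 0.5 * p @ p
        for _ in range(n_leap):                     # leapfrog integration
            p_new = p_new + 0.5 * eps * g
            x_new = x_new + eps * p_new
            lp, g = logp_grad(x_new)
            p_new = p_new + 0.5 * eps * g
        H_new = -lp + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, H_old - H_new)):  # MH acceptance rule
            x, accepted = x_new, accepted + 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_samples

# target: standard 2-D Gaussian, log p = -|x|^2/2, grad = -x
samples, acc = hmc(lambda x: (-0.5 * x @ x, -x), np.zeros(2))
```

For well-tuned step sizes the near-exact energy conservation of leapfrog keeps the acceptance rate high; the Hilbert-space variants in the paper aim to keep it high independently of the discretization dimension.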

8.
The stability of endoreversible heat engines has been extensively studied in the literature. In this paper, an alternative dynamic equations system was obtained by using restitution forces that bring the system back to the stationary state. The departing point is the assumption that the system has a stationary fixed point, along with a Taylor expansion to first order of the input/output heat fluxes, without further specifications regarding the properties of the working fluid or the heat device. Specific cases of the Newton and the phenomenological heat transfer laws in a Carnot-like heat engine model were analyzed. It was shown that the evolution of the trajectories toward the stationary state has relevant consequences for the performance of the system. A major role was played by the symmetries/asymmetries of the conductance ratio σhc of the heat transfer law associated with the input/output heat exchanges. Accordingly, three main behaviors were observed: (1) for small σhc values, the thermodynamic trajectories evolved near the endoreversible limit, improving the efficiency and power output values with a decrease in entropy generation; (2) for large σhc values, the thermodynamic trajectories evolved either near the Pareto front or near the endoreversible limit, and in both cases, they improved the efficiency and power values with a decrease in entropy generation; (3) for the symmetric case (σhc=1), the trajectories evolved either with increasing entropy generation tending toward the Pareto front or with a decrease in entropy generation tending toward the endoreversible limit. Moreover, it was shown that the total entropy generation can define a time scale for both the operation cycle time and the relaxation characteristic time.

9.
Probability is an important question in the ontological interpretation of quantum mechanics. It has been discussed in some trajectory interpretations such as Bohmian mechanics and stochastic mechanics. New questions arise when the probability domain extends to the complex space, including the generation of complex trajectory, the definition of the complex probability, and the relation of the complex probability to the quantum probability. The complex treatment proposed in this article applies the optimal quantum guidance law to derive the stochastic differential equation governing a particle’s random motion in the complex plane. The probability distribution ρc(t,x,y) of the particle’s position over the complex plane z=x+iy is formed by an ensemble of the complex quantum random trajectories, which are solved from the complex stochastic differential equation. Meanwhile, the probability distribution ρc(t,x,y) is verified by the solution of the complex Fokker–Planck equation. It is shown that quantum probability |Ψ|2 and classical probability can be integrated under the framework of complex probability ρc(t,x,y), such that they can both be derived from ρc(t,x,y) by different statistical ways of collecting spatial points.
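Forming a position distribution over the complex plane from an ensemble of random trajectories can be sketched with Euler–Maruyama integration. The drift below is a toy complex Ornstein–Uhlenbeck term, not the paper's optimal quantum guidance law; parameters are illustrative:

```python
import numpy as np

def complex_ou_ensemble(n_paths=20000, n_steps=400, dt=0.01, D=0.5, seed=1):
    """Euler-Maruyama ensemble for the toy complex-plane SDE
    dz = -z dt + sqrt(2D) (dW_x + i dW_y), an illustrative complex
    Ornstein-Uhlenbeck process (NOT the quantum guidance law)."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_paths, dtype=complex)
    s = np.sqrt(2.0 * D * dt)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths)
        z += -z * dt + s * dW
    return z

z = complex_ou_ensemble()
# The empirical density of (Re z, Im z) plays the role of rho_c(t, x, y);
# for this toy drift, x and y equilibrate independently with variance D.
```

Collecting the ensemble positions into a 2-D histogram gives the empirical ρc(t,x,y), which for this linear drift can be checked against the known Gaussian stationary solution of the corresponding Fokker–Planck equation.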

10.
11.
Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio r=N/T, where N is the number of different assets in the portfolio, and T is the length of the available time series. The critical ratio depends on the confidence level α, which means we have a line of critical points on the α–r plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an ℓ1 regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric ℓ1 regularizer. Results are presented for the out-of-sample and the in-sample estimator of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high volatility cutoff, in the sense that it sets the weights of the high volatility elements to zero with higher probability than those of the low volatility items. This cutoff renormalizes the aspect ratio r=N/T, thereby extending the range of the feasibility of optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
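For reference, the plain historical ES estimator whose sample fluctuations the replica analysis quantifies (nonparametric, no regularizer; the toy return series is illustrative):

```python
import numpy as np

def expected_shortfall(returns, alpha=0.975):
    """Historical (nonparametric) Expected Shortfall: the average loss in the
    worst (1 - alpha) tail. Losses are the negatives of returns."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)          # Value-at-Risk at level alpha
    tail = losses[losses >= var]
    return tail.mean()

# toy check on a known sample: at alpha = 0.9 only the single worst loss survives
r = np.array([0.02, 0.01, 0.0, -0.01, -0.03, 0.015, -0.05, 0.005, -0.02, 0.01])
es = expected_shortfall(r, alpha=0.9)
```

With T this small relative to any realistic N, the estimator is exactly the kind of unstable object the abstract describes: the tail average rests on a handful of observations.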

12.
We discuss a covariant relativistic Boltzmann equation which describes the evolution of a system of particles in spacetime evolving with a universal invariant parameter τ. The observed time t of Einstein and Maxwell, in the presence of interaction, is not necessarily a monotonic function of τ. If t(τ) increases with τ, the worldline may be associated with a normal particle, but if it is decreasing in τ, it is observed in the laboratory as an antiparticle. This paper discusses the implications for entropy evolution in this relativistic framework. It is shown that if an ensemble of particles and antiparticles converges in a region of pair annihilation, the entropy of the antiparticle beam may decrease in time.

13.
The decomposition effect of variational mode decomposition (VMD) mainly depends on the choice of the decomposition number K and the penalty factor α. For the selection of these two parameters, empirical methods or single-objective optimization are usually used, but such methods often have limitations and cannot achieve optimal effects. Therefore, a multi-objective multi-island genetic algorithm (MIGA) is proposed to optimize the parameters of VMD and apply it to feature extraction of bearing faults. First, the envelope entropy (Ee) can reflect the sparsity of the signal, and the Renyi entropy (Re) can reflect the energy aggregation degree of the time-frequency distribution of the signal; therefore, Ee and Re are selected as fitness functions, and the optimal VMD parameters are obtained by the MIGA algorithm. Second, the improved VMD algorithm is used to decompose the bearing fault signal, and then the two intrinsic mode functions (IMFs) with the most fault information are selected by improved kurtosis and the Hölder coefficient for reconstruction. Finally, the envelope spectrum of the reconstructed signal is analyzed. Comparative experiments show that the feature extraction method can extract bearing fault features more accurately, and that the fault diagnosis model based on this method has higher accuracy.
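Envelope entropy, the first fitness function, can be sketched with an FFT-based Hilbert transform: a sparse, impulsive envelope (typical of a bearing fault) yields a lower value than a smooth one. This is a stand-alone sketch, not the MIGA-optimized VMD pipeline, and the test signals are illustrative:

```python
import numpy as np

def envelope_entropy(x):
    """Envelope entropy via the analytic signal (FFT-based Hilbert transform);
    lower values indicate a sparser, more impulsive envelope."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    env = np.abs(np.fft.ifft(X * h))          # envelope of the analytic signal
    p = env / env.sum()                        # normalize to a distribution
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

t = np.arange(2048) / 2048.0
smooth = np.sin(2 * np.pi * 8 * t)             # constant envelope
impulsive = smooth * np.exp(-40.0 * (t % 0.25))  # periodic bursts, sparse envelope
```

In the MIGA setting this quantity would be evaluated on each candidate IMF, steering the search toward decompositions that concentrate fault impulses.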

14.
The present study aimed to develop and investigate the local discontinuous Galerkin method for the numerical solution of the fractional logistic differential equation, occurring in many biological and social science phenomena. The fractional derivative is described in the sense of Liouville–Caputo. Using the upwind numerical fluxes, the numerical stability of the method is proved in the L∞ norm. With the aid of the shifted Legendre polynomials, the weak form is reduced to a system of algebraic equations to be solved in each subinterval. Furthermore, to handle the nonlinear term, the technique of product approximation is utilized. The utility of the present discretization technique and some well-known standard schemes is checked through numerical calculations on a range of linear and nonlinear problems with analytical solutions.
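A simpler discretization than LDG conveys the target problem: the explicit L1 finite-difference scheme for the Caputo derivative applied to D^α y = r·y·(1−y). This is a baseline sketch, not the paper's Galerkin method, and all parameter values are illustrative:

```python
import math
import numpy as np

def caputo_logistic_l1(alpha, r, y0, T=5.0, n=2000):
    """Explicit L1 scheme for the Caputo fractional logistic equation
    D^alpha y = r*y*(1-y), 0 < alpha < 1. The Caputo derivative at t_k is
    approximated by mu * sum_j b_j * (y_{k-j} - y_{k-j-1}) with
    mu = dt^(-alpha)/Gamma(2-alpha), b_j = (j+1)^(1-alpha) - j^(1-alpha)."""
    dt = T / n
    mu = dt ** (-alpha) / math.gamma(2.0 - alpha)
    j = np.arange(n)
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)  # L1 weights, b[0] = 1
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(1, n + 1):
        # memory term: sum over past increments, weighted by b_1..b_{k-1}
        hist = np.dot(b[1:k], y[k-1:0:-1] - y[k-2::-1]) if k > 1 else 0.0
        y[k] = y[k - 1] - hist + r * y[k - 1] * (1.0 - y[k - 1]) / mu
    return y

y = caputo_logistic_l1(alpha=0.8, r=1.0, y0=0.1)
```

As alpha → 1 the weights b_j vanish for j ≥ 1 and the scheme collapses to the forward Euler method for the classical logistic equation, a handy consistency check.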

15.
One of the biggest challenges in characterizing 2-D image topographies is finding a low-dimensional parameter set that can succinctly describe, not so much the image patterns themselves, but the nature of these patterns. The 2-D cluster variation method (CVM), introduced by Kikuchi in 1951, can characterize very local image pattern distributions using configuration variables, identifying nearest-neighbor, next-nearest-neighbor, and triplet configurations. Using the 2-D CVM, we can characterize 2-D topographies using just two parameters: the activation enthalpy (ε0) and the interaction enthalpy (ε1). Two different initial topographies (“scale-free-like” and “extreme rich club-like”) were each computationally brought to a CVM free energy minimum, for the case where the activation enthalpy was zero and different values were used for the interaction enthalpy. The results are: (1) the computational configuration variable results differ significantly from the analytically-predicted values well before ε1 approaches the known divergence as ε1 → 0.881, (2) the range of potentially useful parameter values, favoring clustering of like-with-like units, is limited to the region where ε0 < 3 and ε1 < 0.25, and (3) the topographies in the systems that are brought to a free energy minimum show interesting visual features, such as extended “spider legs” connecting previously unconnected “islands,” as well as the evolution of “peninsulas” in what were previously solid masses.

16.
Through the research presented herein, it is quite clear that there are two thermodynamically distinct types (A and B) of energetic processes naturally occurring on Earth. Type A, such as glycolysis and the tricarboxylic acid cycle, apparently follows the second law well; Type B, as exemplified by the thermotrophic function with transmembrane electrostatically localized protons presented here, does not necessarily have to be constrained by the second law, owing to its special asymmetric function. This study now, for the first time, numerically shows that transmembrane electrostatic proton localization (Type-B process) represents a negative entropy event with a local protonic entropy change (ΔSL) in a range from −95 to −110 J/K∙mol. This explains the relationship among the local protonic entropy change (ΔSL), the mitochondrial environmental temperature (T), and the local protonic Gibbs free energy (ΔGL = TΔSL) in isothermal environmental heat utilization. The energy efficiency for the utilization of the total protonic Gibbs free energy (ΔGT, including ΔGL = TΔSL) in driving the synthesis of ATP is estimated to be about 60%, indicating that a significant fraction of the environmental heat energy associated with the thermal motion kinetic energy (kBT) of transmembrane electrostatically localized protons is locked into the chemical form of energy in ATP molecules. Fundamentally, it is the combination of water as a protonic conductor, and thus the formation of a protonic membrane capacitor, with the asymmetric structures of the mitochondrial membrane and cristae that makes this amazing thermotrophic feature possible. The discovery of Type-B energy processes has inspired an invention (WO 2019/136037 A1) for energy renewal through isothermal environmental heat energy utilization with an asymmetric electron-gated function to generate electricity, which has the potential to power electronic devices forever, including mobile phones and laptops. This invention, as an innovative Type-B mimic, may have many possible industrial applications and is likely to be transformative in energy science and technologies for sustainability on Earth.

17.
The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. This divergence is an approximation of the KL divergence that does not require the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information geometric generalization of the skew divergence called the α-geodesical skew divergence is proposed, and its properties are studied.
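Under one common convention, the skew divergence is KL(p ‖ (1−λ)p + λq); mixing p into the second argument guarantees the absolute-continuity property noted above. A small sketch (the α-geodesical generalization of the paper is not implemented here):

```python
import numpy as np

def skew_divergence(p, q, lam):
    """Skew divergence KL(p || (1-lam)*p + lam*q) for discrete distributions
    (one common convention). Mixing p into the reference makes the divergence
    finite even when q has zeros where p does not; lam=1 recovers KL(p||q)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = (1.0 - lam) * p + lam * q
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / m[nz])))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.5, 0.0, 0.5])   # KL(p||q) is infinite; the skewed version is not
d = skew_divergence(p, q, lam=0.5)
```

Here d = 0.5·ln 2, whereas KL(p‖q) diverges because q assigns zero mass to the second outcome.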

18.
19.
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) Through Gallager’s E0 functions (with and without cost constraints); (2) large deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, there have remained gaps in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.
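The Rényi divergence underlying the order-α information measures has a one-line discrete form, D_α(p‖q) = (1/(α−1)) log Σ p^α q^(1−α), recovering KL divergence as α → 1. A hedged sketch (the paper's cost-constrained maximization is not attempted; the distributions are illustrative):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (in nats) for discrete distributions:
    D_alpha(p||q) = log( sum p^alpha * q^(1-alpha) ) / (alpha - 1),
    with the KL divergence taken at alpha = 1 by continuity."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    if abs(alpha - 1.0) < 1e-12:
        return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))
    s = np.sum(p[nz] ** alpha * q[nz] ** (1.0 - alpha))
    return float(np.log(s) / (alpha - 1.0))

p = np.array([0.4, 0.6])
q = np.array([0.5, 0.5])
```

Two standard properties serve as sanity checks: D_α is continuous at α = 1 and nondecreasing in α.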

20.
The effects of using a partly curved porous layer on the thermal management and entropy generation features are studied in a ventilated cavity filled with hybrid nanofluid under the effects of an inclined magnetic field by using the finite volume method. This study is performed over the ranges of pertinent parameters: Reynolds number (100 ≤ Re ≤ 1000), magnetic field strength (0 ≤ Ha ≤ 80), permeability of the porous region (10−4 ≤ Da ≤ 5×10−2), porous layer height (0.15H ≤ tp ≤ 0.45H), porous layer position (0.25H ≤ yp ≤ 0.45H), and curvature size (0 ≤ b ≤ 0.3H). The magnetic field reduces the vortex size, while the average Nusselt number of the hot walls increases for Ha above 20, and the highest enhancement is 47% for the left vertical wall. The variation in the average Nu with the permeability of the layer is about 12.5% and 21% for the left and right vertical walls, respectively, while these amounts are 12.5% and 32.5% when the location of the porous layer changes. The entropy generation increases with Hartmann number above 20, with a 22% increase in the entropy generation at the highest magnetic field. The porous layer height reduces the entropy generation for the domain above it, and that domain gives the highest contribution to the overall entropy generation. When the location of the curved porous layer is varied, the highest variation of entropy generation is attained for the domain below it, while the lowest value is obtained at yp=0.3H. When the size of the elliptic curvature is varied, the overall entropy generation decreases from b = 0 to b=0.2H by about 10% and then increases by 5% from b=0.2H to b=0.3H.
