Similar Literature
Found 20 similar articles (search time: 31 ms).
1.
This paper investigates the achievable per-user degrees of freedom (DoF) in the uplink of multi-cloud based sectored hexagonal cellular networks (M-CRAN). The network consists of $N$ base stations (BS) and $K \le N$ base band unit pools (BBUP), which function as independent cloud centers. Communication between BSs and BBUPs occurs over finite-capacity fronthaul links of capacity $C_F = \mu_F \cdot \frac{1}{2}\log(1+P)$, with $P$ denoting transmit power. In the system model, BBUPs have limited processing capacity $C_{BBU} = \mu_{BBU} \cdot \frac{1}{2}\log(1+P)$. We propose two achievability schemes based on dividing the network into non-interfering parallelogram and hexagonal clusters, respectively. The minimum number of users in a cluster is determined by the ratio of BBUPs to BSs, $r = K/N$. Both the parallelogram and the hexagonal scheme rely on practically implementable beamforming and adapt the way clusters are formed to the sectorization of the cells. The proposed coding schemes improve the sum-rate over naive approaches that ignore cell sectorization, both at finite signal-to-noise ratio (SNR) and in the high-SNR limit. We derive a lower bound on the per-user DoF as a function of $\mu_{BBU}$, $\mu_F$, and $r$. We show that the cut-set bound is attained in several cases, that the gap between the lower bound and the cut-set bound decreases with the inverse BBUP-BS ratio $1/r$ for $\mu_F \le 2M$ irrespective of $\mu_{BBU}$, and that the per-user DoF achieved through hexagonal clustering cannot exceed that of parallelogram clustering for any value of $\mu_{BBU}$ and $r$ as long as $\mu_F \le 2M$. Since the achievability gap decreases with the inverse of the BBUP-BS ratio for small and moderate fronthaul capacities, the cut-set bound is nearly achieved even for small cluster sizes in this range of fronthaul capacities. For higher fronthaul capacities the gap is not always tight but decreases with processing capacity; the cut-set bound, e.g., at $5M/6$, can still be achieved with a moderate cluster size.
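A minimal sketch (Python) of the capacity scaling the abstract builds on: a fronthaul rate $C_F = \mu_F \cdot \frac{1}{2}\log(1+P)$ contributes a pre-log factor of $\mu_F$ in the high-SNR limit, which is precisely what the DoF analysis counts. The value $\mu_F = 0.75$ is an illustrative assumption, not a parameter from the paper.

```python
import numpy as np

def fronthaul_capacity(mu_F: float, P: float) -> float:
    """Fronthaul capacity C_F = mu_F * (1/2) * log2(1 + P)."""
    return mu_F * 0.5 * np.log2(1.0 + P)

# The DoF (pre-log) of a link is its rate normalized by (1/2)*log2(P) as P grows.
for P_dB in (20, 60, 100):
    P = 10 ** (P_dB / 10)
    prelog = fronthaul_capacity(0.75, P) / (0.5 * np.log2(P))
    print(f"P = {P_dB:3d} dB  ->  estimated pre-log {prelog:.4f}")
# The estimate converges to mu_F = 0.75: each fronthaul link contributes
# mu_F degrees of freedom in the high-SNR limit.
```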

2.
We study a two-state “jumping diffusivity” model for a Brownian process alternating between two different diffusion constants, $D_+ > D_-$, with random waiting times in both states whose distribution is rather general. In the limit of long measurement times, Gaussian behavior with an effective diffusion coefficient is recovered. We show that, for equilibrium initial conditions and in the limit $D_- \to 0$, the short-time behavior leads to a cusp, namely a non-analytical behavior, in the distribution of the displacements $P(x,t)$ for $x \to 0$. Visually this cusp, or tent-like shape, resembles similar behavior found in many experiments on diffusing particles in disordered environments, such as glassy systems and intracellular media. This general result depends only on the existence of finite mean values of the waiting times in the different states of the model. Gaussian statistics in the long-time limit is achieved due to ergodicity and convergence of the distribution of the temporal occupation fraction in state $D_+$ to a $\delta$-function. The short-time behavior of the same quantity converges to a uniform distribution, which leads to the non-analyticity in $P(x,t)$. We demonstrate how the super-statistical framework is a zeroth-order short-time expansion of $P(x,t)$, in the number of transitions, that does not yield the cusp-like shape. The latter, considered the key feature of experiments in the field, is recovered with the first correction in perturbation theory.
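The cusp is easy to reproduce numerically. The sketch below makes three assumptions the abstract leaves open: exponential waiting times (one admissible case of the general distributions considered), a small but nonzero $D_-$ standing in for the $D_- \to 0$ limit, and illustrative parameter values.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
D_plus, D_minus = 1.0, 0.02      # small D_- approximates the D_- -> 0 limit
tau_mean = 1.0                   # mean waiting time in each state (exponential case)
t_obs, n_part = 0.2, 50_000      # observation time well below tau_mean

x = np.zeros(n_part)
for i in range(n_part):
    state = rng.integers(2)      # equilibrium start: each state with probability 1/2
    t = 0.0
    while t < t_obs:
        dt = min(rng.exponential(tau_mean), t_obs - t)
        D = D_plus if state else D_minus
        x[i] += np.sqrt(2.0 * D * dt) * rng.standard_normal()
        t += dt
        state = 1 - state        # switch diffusivity

plt.hist(x, bins=201, density=True)
plt.yscale("log")                # the tent-like cusp at x = 0 stands out on a log scale
plt.xlabel("x"); plt.ylabel("P(x, t)")
plt.show()
```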

3.
Using finite-time thermodynamic theory, an irreversible steady-flow Lenoir cycle model is established, and expressions for the power output and thermal efficiency of the model are derived. Through numerical calculations, for different fixed total heat conductances ($U_T$) of the two heat exchangers, the maximum powers ($P_{\max}$), the maximum thermal efficiencies ($\eta_{\max}$), and the corresponding optimal heat-conductance distribution ratios ($u_{LP(\mathrm{opt})}$ and $u_{L\eta(\mathrm{opt})}$) are obtained. The effects of internal irreversibility are analyzed. The results show that when the heat conductances of the hot- and cold-side heat exchangers are constants, the corresponding power output and thermal efficiency are constant values. When the heat-source temperature ratio ($\tau$) and the effectivenesses of the heat exchangers increase, the corresponding power output and thermal efficiency increase. When the heat-conductance distributions take their optimal values, the characteristic relationships $P$–$u_L$ and $\eta$–$u_L$ are parabolic-like. For a given $U_T$, as $\tau$ increases, $P_{\max}$, $\eta_{\max}$, $u_{LP(\mathrm{opt})}$, and $u_{L\eta(\mathrm{opt})}$ all increase. For a given $\tau$, as $U_T$ increases, $P_{\max}$ and $\eta_{\max}$ increase, while $u_{LP(\mathrm{opt})}$ and $u_{L\eta(\mathrm{opt})}$ decrease.
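The conductance-distribution sweep can be illustrated with a stand-in model. The sketch below uses the classical Curzon–Ahlborn endoreversible power expression in place of the paper's Lenoir-cycle equations (which the abstract does not reproduce), purely to show the parabolic-like $P$–$u_L$ characteristic and the extraction of $P_{\max}$ and $u_{LP(\mathrm{opt})}$; all numbers are hypothetical.

```python
import numpy as np

# Stand-in model: Curzon-Ahlborn maximum power of an endoreversible engine,
# P = (k_h*k_c/(k_h+k_c)) * (sqrt(T_h) - sqrt(T_c))**2,
# used only to demonstrate the conductance-distribution sweep.
T_c, tau, U_T = 300.0, 4.0, 10.0       # K, temperature ratio tau = T_h/T_c, W/K
T_h = tau * T_c

u_L = np.linspace(0.01, 0.99, 199)     # fraction of U_T assigned to the hot side
k_h, k_c = u_L * U_T, (1.0 - u_L) * U_T
P = (k_h * k_c / (k_h + k_c)) * (np.sqrt(T_h) - np.sqrt(T_c)) ** 2

i = np.argmax(P)                       # parabolic-like P-u_L curve, single maximum
print(f"P_max = {P[i]:.1f} W at u_L(opt) = {u_L[i]:.2f}")
```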

4.
One of the biggest challenges in characterizing 2-D image topographies is finding a low-dimensional parameter set that can succinctly describe, not so much the image patterns themselves, but the nature of these patterns. The 2-D cluster variation method (CVM), introduced by Kikuchi in 1951, can characterize very local image pattern distributions using configuration variables that identify nearest-neighbor, next-nearest-neighbor, and triplet configurations. Using the 2-D CVM, we can characterize 2-D topographies with just two parameters: the activation enthalpy ($\varepsilon_0$) and the interaction enthalpy ($\varepsilon_1$). Two different initial topographies (“scale-free-like” and “extreme rich club-like”) were each computationally brought to a CVM free energy minimum for the case where the activation enthalpy was zero, using different values of the interaction enthalpy. The results are: (1) the computational configuration-variable results differ significantly from the analytically predicted values well before $\varepsilon_1$ approaches the known divergence as $\varepsilon_1 \to 0.881$; (2) the range of potentially useful parameter values, favoring clustering of like-with-like units, is limited to the region where $\varepsilon_0 < 3$ and $\varepsilon_1 < 0.25$; and (3) the topographies in the systems brought to a free energy minimum show interesting visual features, such as extended “spider legs” connecting previously unconnected “islands,” as well as the evolution of “peninsulas” in what were previously solid masses.
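A minimal sketch of the kind of configuration variables the 2-D CVM is built on: counting like-with-like nearest-neighbor bonds on a binary grid (the next-nearest-neighbor and triplet variables extend the same bookkeeping). The grid here is random, so the like-bond fraction sits near 0.5; a clustering-favoring interaction enthalpy would push it higher.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, size=(64, 64))  # binary "topography" (1 = active unit)

# Fraction of like/unlike nearest-neighbor pairs over horizontal + vertical bonds.
h_pairs = np.stack([grid[:, :-1].ravel(), grid[:, 1:].ravel()], axis=1)
v_pairs = np.stack([grid[:-1, :].ravel(), grid[1:, :].ravel()], axis=1)
pairs = np.concatenate([h_pairs, v_pairs])

like = np.mean(pairs[:, 0] == pairs[:, 1])
print(f"like-with-like bond fraction: {like:.3f}  (~0.5 for a random grid)")
print(f"fraction of active units    : {grid.mean():.3f}")
```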

5.
Recently, it has been shown that the information flow and causality between two time series can be inferred in a rigorous and quantitative sense, and that the resulting causality can be normalized. A corollary that follows is that, in the linear limit, causation implies correlation, while correlation does not imply causation. Now suppose there is an event A taking a harmonic form (sine/cosine), and it generates through some process another event B so that B always lags A by a phase of $\pi/2$. Here the causality is obvious, while by computation the correlation is nevertheless zero. This apparent contradiction is rooted in the fact that a harmonic system always leaves a single point on the Poincaré section; it does not add information. That is to say, though the absolute information flow from A to B is zero, i.e., $T_{A\to B} = 0$, the total information increase of B is also zero, so the normalized $T_{A\to B}$, denoted $\tau_{A\to B}$, takes the indeterminate form $0/0$. By slightly perturbing the system with some noise, solving a stochastic differential equation, and letting the perturbation go to zero, it can be shown that $\tau_{A\to B}$ approaches 100%, just as one would expect.
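The zero-correlation claim is a two-line check: over whole periods, a harmonic signal and its $\pi/2$-lagged response are orthogonal, so their Pearson correlation vanishes even though B is completely determined by A.

```python
import numpy as np

t = np.arange(0.0, 100.0 * np.pi, 0.01)   # many whole periods
A = np.sin(t)
B = np.sin(t - np.pi / 2)                 # B lags A by pi/2, i.e. B = -cos(t)

r = np.corrcoef(A, B)[0, 1]
print(f"Pearson correlation between A and its pi/2-lagged response: {r:.4f}")
# ~0 by the orthogonality of sine and cosine, despite the perfect causal link.
```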

6.
Transverse momentum spectra of $\pi^+$, $p$, $\Lambda$, $\Xi^-$ or $\bar{\Xi}^+$, $\Omega^-$ or $\bar{\Omega}^+$, and deuteron ($d$) in different centrality intervals in nucleus–nucleus collisions at the center-of-mass energy are analyzed with the blast-wave model with Boltzmann–Gibbs statistics. We extracted the kinetic freezeout temperature, transverse flow velocity, and kinetic freezeout volume from the transverse momentum spectra of the particles. It is observed that the non-strange and strange (multi-strange) particles freeze out separately due to different reaction cross-sections. The freezeout volume and transverse flow velocity are mass dependent, decreasing with the rest mass of the particles. The present work reveals a scenario of double kinetic freezeout in nucleus–nucleus collisions. Furthermore, the kinetic freezeout temperature and freezeout volume are larger in central collisions than in peripheral collisions, whereas the transverse flow velocity remains almost unchanged from central to peripheral collisions.
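A sketch of the Boltzmann–Gibbs blast-wave (BGBW) spectrum shape such fits rest on, using the standard flow profile $\rho = \tanh^{-1}\!\big(\beta_s (r/R)^n\big)$; the freezeout parameters below ($T \approx 0.12$ GeV, $\beta_s \approx 0.6$) are illustrative assumptions, not the values extracted in the paper.

```python
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import quad

def bgbw_spectrum(pT, m0, T, beta_s, n=1.0, R=1.0):
    """Boltzmann-Gibbs blast-wave shape dN/(pT dpT), up to normalization."""
    mT = np.sqrt(pT**2 + m0**2)                    # transverse mass
    def integrand(r):
        rho = np.arctanh(beta_s * (r / R) ** n)    # transverse flow rapidity
        return r * mT * i0(pT * np.sinh(rho) / T) * k1(mT * np.cosh(rho) / T)
    return quad(integrand, 0.0, R)[0]

# Proton spectrum shape at a few pT values (GeV), hypothetical parameters.
for pT in (0.5, 1.0, 2.0, 3.0):
    print(f"pT = {pT:.1f} GeV  ->  {bgbw_spectrum(pT, m0=0.938, T=0.12, beta_s=0.6):.3e}")
```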

7.
A solvable model of a periodically driven trapped mixture of Bose–Einstein condensates, consisting of $N_1$ interacting bosons of mass $m_1$ driven by a force of amplitude $f_{L,1}$ and $N_2$ interacting bosons of mass $m_2$ driven by a force of amplitude $f_{L,2}$, is presented. The model generalizes the harmonic-interaction model for mixtures to the time-dependent domain. The resulting many-particle ground Floquet wavefunction and quasienergy, as well as the time-dependent densities and reduced density matrices, are prescribed explicitly and analyzed at the many-body and mean-field levels of theory, for finite systems and in the limit of an infinite number of particles. We prove that in the limit of an infinite number of particles the time-dependent densities per particle are given by their respective mean-field quantities, and that the time-dependent reduced one-particle and two-particle density matrices per particle of the driven mixture are 100% condensed. Interestingly, the quasienergy per particle coincides with the mean-field value at this limit only if the relative center-of-mass coordinate of the two Bose–Einstein condensates is not activated by the driving forces $f_{L,1}$ and $f_{L,2}$. As an application, we investigate the imprinting of angular momentum and its fluctuations when steering a Bose–Einstein condensate by an interacting bosonic impurity, and the resulting modes of rotation. Whereas the expectation values per particle of the angular-momentum operator for the many-body and mean-field solutions coincide in the limit of an infinite number of particles, the respective fluctuations can differ substantially. The results are analyzed in terms of the transformation properties of the angular-momentum operator under translations and boosts, and as a function of the interactions between the particles. Implications are briefly discussed.

8.
9.
The effects of using a partly curved porous layer on the thermal management and entropy generation features are studied in a ventilated cavity filled with a hybrid nanofluid under an inclined magnetic field, using the finite volume method. The study covers the following ranges of the pertinent parameters: Reynolds number ($100 \le Re \le 1000$), magnetic field strength ($0 \le Ha \le 80$), permeability of the porous region ($10^{-4} \le Da \le 5\times10^{-2}$), porous layer height ($0.15H \le t_p \le 0.45H$), porous layer position ($0.25H \le y_p \le 0.45H$), and curvature size ($0 \le b \le 0.3H$). The magnetic field reduces the vortex size, while the average Nusselt number of the hot walls increases for $Ha$ above 20; the highest enhancement is 47% for the left vertical wall. The variation in the average Nu with the permeability of the layer is about 12.5% and 21% for the left and right vertical walls, respectively, while these amounts are 12.5% and 32.5% when the location of the porous layer changes. The entropy generation increases with Hartmann number above 20, with a 22% increase at the highest magnetic field. Increasing the porous layer height reduces the entropy generation in the domain above it, the domain that gives the highest contribution to the overall entropy generation. When the location of the curved porous layer is varied, the highest variation of entropy generation is attained in the domain below it, while the lowest value is obtained at $y_p = 0.3H$. When the size of the elliptic curvature is varied, the overall entropy generation decreases by about 10% from $b = 0$ to $b = 0.2H$ and then increases by 5% from $b = 0.2H$ to $b = 0.3H$.

10.
The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay $\tau$ and embedding dimension $p$ for phase space reconstruction. The value of $\tau$ can be estimated from the Mutual Information, but this method is computationally rather cumbersome. Additionally, some researchers have recommended that $\tau$ be chosen dependent on the embedding dimension $p$ by means of an appropriate value of the time-delay window $\tau_w = (p-1)\tau$, which is the optimal time delay for independence of the time series. The C-C method, based on the Correlation Integral, is simpler than Mutual Information and has been proposed to select $\tau_w$ and $\tau$ optimally. In this paper, we suggest a simple method for estimating $\tau$ and $\tau_w$ based on symbolic analysis and symbolic entropy. As in the C-C method, $\tau$ is estimated as the first local optimal time delay and $\tau_w$ as the time delay for independence of the time series. The method is applied to several chaotic time series that serve as a basis of comparison for several techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, for the studied models, performs better than the C-C method for the choice of the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity under epileptic episodes.
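A sketch of the symbolic-entropy ingredient: ordinal-pattern (symbolic) entropy computed as a function of the delay $\tau$ for a chaotic signal. The selection rule below (delay of maximal entropy in a window, as a proxy for the "first local optimal delay") is a simplification; the paper's exact criterion is not spelled out in the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, 1.0], t_eval=np.arange(0, 200, 0.01))
x = sol.y[0, 2000:]                     # discard the transient

def symbolic_entropy(x, tau, m=3):
    """Shannon entropy (bits) of ordinal patterns of order m at delay tau."""
    n = len(x) - (m - 1) * tau
    emb = np.stack([x[i * tau: i * tau + n] for i in range(m)])  # delay vectors
    codes = np.argsort(emb, axis=0)                              # ordinal pattern
    codes = (codes * (m ** np.arange(m))[:, None]).sum(axis=0)   # pattern -> integer
    p = np.unique(codes, return_counts=True)[1] / n
    return -(p * np.log2(p)).sum()

taus = np.arange(1, 51)
H = np.array([symbolic_entropy(x, tau) for tau in taus])
tau_opt = taus[np.argmax(H)]            # crude proxy for the first local optimum
print("suggested time delay (samples):", tau_opt)
```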

11.
We present computer simulation and theoretical results for a system of $N$ quantum hard spheres (QHS) of diameter $\sigma$ and mass $m$ at temperature $T$, confined between parallel hard walls separated by a distance $H\sigma$, within the range $1 \le H < \infty$. Semiclassical Monte Carlo computer simulations were performed, adapted to a confined space, considering effects in terms of the density of particles $\rho^* = N/V$, where $V$ is the accessible volume, the inverse length $H^{-1}$, and the de Broglie thermal wavelength $\lambda_B = h/\sqrt{2\pi m k T}$, where $k$ and $h$ are the Boltzmann and Planck constants, respectively. For the cases of extreme and maximum confinement, $0.5 < H^{-1} < 1$ and $H^{-1} = 1$, respectively, analytical results are given based on an extension to quantum systems of the Helmholtz free energies of the corresponding classical systems.
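The de Broglie thermal wavelength is the one closed-form quantity in the abstract, so a quick check is easy; the helium-4 mass and the temperatures below are illustrative choices, not the paper's.

```python
import numpy as np

h  = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K
m  = 6.6464731e-27    # helium-4 atomic mass, kg (illustrative choice)

def de_broglie_thermal(m, T):
    """Thermal de Broglie wavelength lambda_B = h / sqrt(2*pi*m*k*T)."""
    return h / np.sqrt(2.0 * np.pi * m * kB * T)

for T in (4.2, 20.0, 300.0):
    print(f"T = {T:6.1f} K  ->  lambda_B = {de_broglie_thermal(m, T) * 1e9:.4f} nm")
# Quantum effects matter when lambda_B is comparable to sigma or to the wall
# separation H*sigma; they fade as T grows and lambda_B shrinks.
```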

12.
13.
We discuss a covariant relativistic Boltzmann equation describing the evolution of a system of particles in spacetime that evolves with a universal invariant parameter $\tau$. The observed time $t$ of Einstein and Maxwell, in the presence of interaction, is not necessarily a monotonic function of $\tau$. If $t(\tau)$ increases with $\tau$, the worldline may be associated with a normal particle; if it decreases in $\tau$, it is observed in the laboratory as an antiparticle. This paper discusses the implications for entropy evolution in this relativistic framework. It is shown that if an ensemble of particles and antiparticles converges in a region of pair annihilation, the entropy of the antiparticle beam may decrease in time.

14.
Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio $r = N/T$, where $N$ is the number of different assets in the portfolio and $T$ is the length of the available time series. The critical ratio depends on the confidence level $\alpha$, which means we have a line of critical points on the $\alpha$–$r$ plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an $\ell_1$ regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric $\ell_1$ regularizer. Results are presented for the out-of-sample and the in-sample estimators of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high-volatility cutoff, in the sense that it sets the weights of the high-volatility elements to zero with higher probability than those of the low-volatility items. This cutoff renormalizes the aspect ratio $r = N/T$, thereby extending the range of feasibility of the optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
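ES optimization itself is a linear program (the Rockafellar–Uryasev formulation), and the no-short constraint enters simply as non-negativity bounds on the weights. The sketch below solves it on synthetic Gaussian returns (an assumption; the paper works analytically via replicas, not by LP) and counts the weights the constraint pushes to zero.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
N, T, alpha = 10, 250, 0.975                  # assets, observations, ES level
returns = rng.normal(0.0005, 0.01, (T, N))    # hypothetical return sample

# Rockafellar-Uryasev LP:  min  var + 1/((1-alpha)T) * sum(u)
# s.t.  u_t >= -r_t.w - var,  u_t >= 0,  sum(w) = 1,  w >= 0
# (the no-short constraint, the asymmetric ell_1 special case in the abstract).
c = np.concatenate([np.zeros(N), [1.0], np.full(T, 1.0 / ((1 - alpha) * T))])
A_ub = np.hstack([-returns, -np.ones((T, 1)), -np.eye(T)])  # -r_t.w - var - u_t <= 0
b_ub = np.zeros(T)
A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :]
bounds = [(0, None)] * N + [(None, None)] + [(0, None)] * T

res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds, method="highs")
w = res.x[:N]
print("in-sample ES:", round(res.fun, 5))
print("weights set to zero by the constraint:", int(np.sum(w < 1e-8)), "of", N)
```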

15.
Word embeddings based on a conditional model are commonly used in Natural Language Processing (NLP) tasks to embed the words of a dictionary into a low-dimensional linear space. Their computation is based on maximizing the likelihood of a conditional probability distribution for each word of the dictionary. These distributions form a Riemannian statistical manifold, where word embeddings can be interpreted as vectors in the tangent space of a specific reference measure on the manifold. A novel family of word embeddings, called α-embeddings, has recently been introduced, deriving from the geometrical deformation of the simplex of probabilities through a parameter α, using notions from Information Geometry. After introducing the α-embeddings, we show how the deformation of the simplex, controlled by α, provides an extra handle to increase the performance of several intrinsic and extrinsic tasks in NLP. We test the α-embeddings on different tasks with models of increasing complexity, showing that the advantages associated with the use of α-embeddings are present also for models with a large number of parameters. Finally, we show that tuning α allows for higher performance than using larger models in which a transformation of the embeddings is additionally learned during training, as experimentally verified in attention models.
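For concreteness, a sketch of the standard Amari $\alpha$-representation of a probability vector, the Information Geometry deformation that (by assumption here; the paper's exact construction may differ in normalization) underlies the $\alpha$-embedding family.

```python
import numpy as np

def alpha_rep(p, alpha):
    """Amari alpha-representation: l_alpha(p) = 2/(1-alpha) * p**((1-alpha)/2)
    for alpha != 1, and log(p) in the limit alpha -> 1."""
    if np.isclose(alpha, 1.0):
        return np.log(p)
    return 2.0 / (1.0 - alpha) * p ** ((1.0 - alpha) / 2.0)

p = np.array([0.7, 0.2, 0.1])   # toy conditional word distribution
for a in (-1.0, 0.0, 1.0):
    print(f"alpha = {a:+.0f}: {np.round(alpha_rep(p, a), 3)}")
# alpha = -1 recovers the mixture representation (p itself), alpha = +1 the
# exponential (log) representation; intermediate alpha interpolates the
# geometry, which is the extra handle the embeddings exploit.
```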

16.
In this paper, the high-dimensional linear regression model is considered, where the covariates are measured with additive noise. Different from most other methods, which are based on the assumption that the true covariates are fully observed, the results in this paper require only that the corrupted covariate matrix is observed. By the application of information theory, the minimax rates of convergence for estimation are investigated in terms of the $\ell_p$ ($1 \le p < \infty$) losses under a general sparsity assumption on the underlying regression parameter and some regularity conditions on the observed covariate matrix. The established lower and upper bounds on the minimax risks agree up to constant factors when $p = 2$, together providing the information-theoretic limits of estimating a sparse vector in the high-dimensional linear errors-in-variables model. An estimator for the underlying parameter is also proposed and shown to be minimax optimal in the $\ell_2$-loss.
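The errors-in-variables difficulty, and a standard bias-corrected fix, can be shown in a few lines. The sketch below uses the well-known corrected-Gram construction (Loh–Wainwright style, an assumption; the paper's own minimax-optimal estimator is not reproduced in the abstract) in a deliberately low-dimensional setting, and compares it with a naive estimator that treats the corrupted covariates as clean.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, s, sigma_w = 400, 50, 5, 0.5          # samples, dimension, sparsity, X-noise
beta = np.zeros(d); beta[:s] = 1.0
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)
Z = X + sigma_w * rng.standard_normal((n, d))  # only corrupted covariates observed

# Additive measurement noise inflates Z'Z/n by sigma_w^2 * I; subtract it.
Gamma = Z.T @ Z / n - sigma_w**2 * np.eye(d)
gamma = Z.T @ y / n

def ista(Gamma, gamma, lam, step=0.01, iters=5000):
    """Proximal gradient on 0.5*b'Gamma*b - gamma'b + lam*||b||_1."""
    b = np.zeros(len(gamma))
    for _ in range(iters):
        b = b - step * (Gamma @ b - gamma)
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft threshold
    return b

lam = 0.1 * np.sqrt(np.log(d) / n)
b_corr  = ista(Gamma, gamma, lam)
b_naive = ista(Z.T @ Z / n, gamma, lam)     # ignores the measurement error
print("l2 error, corrected:", np.linalg.norm(b_corr - beta).round(3))
print("l2 error, naive    :", np.linalg.norm(b_naive - beta).round(3))
```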

17.
Probability is an important question in the ontological interpretation of quantum mechanics. It has been discussed in some trajectory interpretations such as Bohmian mechanics and stochastic mechanics. New questions arise when the probability domain extends to the complex space, including the generation of complex trajectories, the definition of the complex probability, and the relation of the complex probability to the quantum probability. The complex treatment proposed in this article applies the optimal quantum guidance law to derive the stochastic differential equation governing a particle’s random motion in the complex plane. The probability distribution $\rho_c(t,x,y)$ of the particle’s position over the complex plane $z = x + iy$ is formed by an ensemble of complex quantum random trajectories, which are solved from the complex stochastic differential equation. Meanwhile, the probability distribution $\rho_c(t,x,y)$ is verified by the solution of the complex Fokker–Planck equation. It is shown that the quantum probability $|\Psi|^2$ and the classical probability can be integrated under the framework of the complex probability $\rho_c(t,x,y)$, such that both can be derived from $\rho_c(t,x,y)$ by different statistical ways of collecting spatial points.

18.
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between spike trains’ Information Transmission Rates (ITR) and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a spike-train fluctuation measure, I take the standard deviation $\sigma$, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between ITR and signal fluctuations strongly depends on the parameter $s$, a sum of transition probabilities from the no-spike state to the spike state. Estimates of the Information Transmission Rate were found by expressions depending on the values of the signal fluctuations and the parameter $s$. It turned out that for $s < 1$ the quotient ITR$/\sigma$ has a maximum and can tend to zero depending on the transition probabilities, while for $s > 1$ the quotient ITR$/\sigma$ is separated from 0. Additionally, it was shown that the quotient of ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment ($s > 1$), to achieve appropriate reliability and efficiency of transmission, IS with a higher tendency of transition from the no-spike to the spike state should be applied. Such selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
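A sketch of the underlying bookkeeping, with two assumptions made explicit: the ITR is identified with the entropy rate of the two-state Markov chain, and $\sigma$ is taken as the per-step standard deviation of the binary spike indicator; the abstract's exact definitions may differ.

```python
import numpy as np

def h(p):
    """Binary Shannon entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def markov_spike_stats(p01, p10):
    """Two-state (0 = no spike, 1 = spike) Markov chain: entropy rate
    (bits/step), stationary firing rate, and per-step std dev sigma."""
    pi1 = p01 / (p01 + p10)                 # stationary P(spike)
    entropy_rate = (1 - pi1) * h(p01) + pi1 * h(p10)
    sigma = np.sqrt(pi1 * (1 - pi1))        # fluctuation of the spike indicator
    return entropy_rate, pi1, sigma

for p01, p10 in [(0.1, 0.4), (0.4, 0.1), (0.7, 0.7)]:
    itr, rate, sigma = markov_spike_stats(p01, p10)
    print(f"p01={p01}, p10={p10}:  ITR~{itr:.3f} bits/step, "
          f"rate={rate:.2f}, ITR/sigma={itr / sigma:.3f}")
```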

19.
Through the research presented herein, it is quite clear that there are two thermodynamically distinct types (A and B) of energetic processes naturally occurring on Earth. Type A, such as glycolysis and the tricarboxylic acid cycle, apparently follows the second law well; Type B, as exemplified by the thermotrophic function with transmembrane electrostatically localized protons presented here, does not necessarily have to be constrained by the second law, owing to its special asymmetric function. This study now, for the first time, numerically shows that transmembrane electrostatic proton localization (a Type-B process) represents a negative entropy event with a local protonic entropy change ($\Delta S_L$) in a range from −95 to −110 J/(K·mol). This explains the relationship among the local protonic entropy change ($\Delta S_L$), the mitochondrial environmental temperature ($T$), and the local protonic Gibbs free energy ($\Delta G_L = T\Delta S_L$) in isothermal environmental heat utilization. The energy efficiency for the utilization of the total protonic Gibbs free energy ($\Delta G_T$, including $\Delta G_L = T\Delta S_L$) in driving the synthesis of ATP is estimated to be about 60%, indicating that a significant fraction of the environmental heat energy associated with the thermal motion kinetic energy ($k_B T$) of transmembrane electrostatically localized protons is locked into the chemical form of energy in ATP molecules. Fundamentally, it is the combination of water as a protonic conductor, and thus the formation of a protonic membrane capacitor, with the asymmetric structures of the mitochondrial membrane and cristae that makes this remarkable thermotrophic feature possible. The discovery of Type-B energy processes has inspired an invention (WO 2019/136037 A1) for energy renewal through isothermal environmental heat energy utilization with an asymmetric electron-gated function to generate electricity, which has the potential to power electronic devices forever, including mobile phones and laptops. This invention, as an innovative Type-B mimic, may have many possible industrial applications and is likely to be transformative in energy science and technologies for sustainability on Earth.
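The quoted numbers combine directly. Taking a mitochondrial temperature of 310 K (an assumption; the abstract does not state $T$):

```python
# Worked numbers for the local protonic Gibbs free energy DG_L = T * DS_L,
# using the DS_L range quoted in the abstract and T = 310 K (an assumption).
T = 310.0                      # K
for dS in (-95.0, -110.0):     # J/(K*mol), range reported in the abstract
    dG = T * dS / 1000.0       # kJ/mol
    print(f"DS_L = {dS:6.1f} J/(K*mol)  ->  DG_L = {dG:6.1f} kJ/mol")
# Gives -29.5 to -34.1 kJ/mol; for scale, ATP synthesis requires roughly
# +30 to +50 kJ/mol under cellular conditions (textbook range, an assumption).
```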

20.
In this paper, we study the entropy functions on extreme rays of the polymatroidal region which contain a matroid, i.e., matroidal entropy functions. We introduce variable-strength orthogonal arrays indexed by a connected matroid $M$ and a positive integer $v$, which can be regarded as an expansion of the classic combinatorial structure of orthogonal arrays. Interestingly, they are equivalent to the partition representations of the matroid $M$ with degree $v$ and to the $(M, v)$ almost affine codes. Thus a synergy among four fields, i.e., information theory, matroid theory, combinatorial design, and coding theory, is developed, which may lead to potential applications in information problems such as network coding and secret sharing. Leveraging the construction of variable-strength orthogonal arrays, we characterize all matroidal entropy functions of order $n \le 5$ with the exception of $\log 10 \cdot U_{2,5}$ and $\log v \cdot U_{3,5}$ for some $v$.
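A small sketch of the combinatorial object involved: an orthogonal-array strength checker, applied to the almost affine code $\{(a, b, a+b \bmod v)\}$, which viewed as an array is an OA of strength 2 and corresponds to the uniform matroid $U_{2,3}$ (a standard small example, chosen here purely for illustration).

```python
from itertools import combinations, product

def is_orthogonal_array(rows, t, v):
    """Strength-t check: in every set of t columns, each t-tuple over
    {0..v-1} appears equally often."""
    n = len(rows[0])
    for cols in combinations(range(n), t):
        counts = {}
        for row in rows:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != v**t or len(set(counts.values())) != 1:
            return False
    return True

# The code {(a, b, a+b mod v)} is almost affine; as an array it is an OA of
# strength 2, corresponding to the uniform matroid U_{2,3}.
v = 3
rows = [(a, b, (a + b) % v) for a, b in product(range(v), repeat=2)]
print(is_orthogonal_array(rows, t=2, v=v))   # True
```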

