Similar Documents
20 similar documents retrieved (search time: 187 ms).
1.
Based on the Kedem–Katchalsky formalism, a model equation for the membrane potential ($\Delta\psi_s$) generated in a membrane system was derived under conditions of concentration polarization. In this system, a horizontally oriented electro-neutral biomembrane separates solutions of the same electrolyte at different concentrations. The consequence of concentration polarization is the creation of concentration boundary layers on both sides of the membrane. The basic equation of this model includes the unknown ratio of solution concentrations ($C_i/C_e$) at the membrane/boundary-layer interfaces. We present a calculation procedure for $C_i/C_e$ based on novel equations derived in the paper, containing the transport parameters of the membrane ($L_p$, $\sigma$, and $\omega$), the solutions ($\rho$, $\nu$), the concentration boundary layer thicknesses ($\delta_l$, $\delta_h$), the concentration Rayleigh number ($R_C$), the concentration polarization factor ($\zeta_s$), the volume flux ($J_v$), the mechanical pressure difference ($\Delta P$), and the ratio of known solution concentrations ($C_h/C_l$). From the resulting equation, $\Delta\psi_s$ was calculated for various combinations of the solution concentration ratio ($C_h/C_l$), the concentration Rayleigh number ($R_C$), the concentration polarization coefficient ($\zeta_s$), and the hydrostatic pressure difference ($\Delta P$). Calculations were performed for a case in which an aqueous NaCl solution with a fixed concentration of 1 mol m$^{-3}$ ($C_l$) was on one side of the membrane, and an aqueous NaCl solution with a concentration between 1 and 15 mol m$^{-3}$ ($C_h$) was on the other. It is shown that $\Delta\psi_s$ depends on the value of any one of the factors ($\Delta P$, $C_h/C_l$, $R_C$, and $\zeta_s$) at fixed values of the other three.
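For orientation, the membrane transport parameters named above ($L_p$, $\sigma$, $\omega$) are the coefficients of the classical Kedem–Katchalsky practical equations for the volume and solute fluxes; a standard textbook form (our notation, not reproduced from the paper) is

\[
J_v = L_p\,(\Delta P - \sigma\,\Delta\pi), \qquad
J_s = \omega\,\Delta\pi + (1-\sigma)\,\bar{C}\,J_v,
\]

where $\Delta\pi$ is the osmotic pressure difference across the membrane and $\bar{C}$ is the mean solute concentration.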

2.
Detrended fluctuation analysis (DFA) has become a standard method to quantify the correlations and scaling properties of real-world complex time series. For a given scale of observation $\ell$, DFA provides the fluctuation function $F(\ell)$, which quantifies the fluctuations of the time series around the local trend, which is subtracted (detrended). If the time series exhibits scaling properties, then $F(\ell)\sim\ell^{\alpha}$ asymptotically, and the scaling exponent $\alpha$ is typically estimated as the slope of a linear fit in the $\log F(\ell)$ vs. $\log \ell$ plot. In this way, $\alpha$ measures the strength of the correlations and characterizes the underlying dynamical system. However, in many cases, and especially in physiological time series, the scaling behavior is different at short and long scales, resulting in $\log F(\ell)$ vs. $\log \ell$ plots with two different slopes: $\alpha_1$ at short scales and $\alpha_2$ at large scales of observation. These two exponents are usually associated with the existence of different mechanisms that work at distinct time scales acting on the underlying dynamical system. Here, however, since the power-law behavior of $F(\ell)$ is asymptotic, we question the use of $\alpha_1$ to characterize the correlations at short scales. To this end, we first show that, even for artificial time series with perfect scaling, i.e., with a single exponent $\alpha$ valid for all scales, DFA provides an $\alpha_1$ value that systematically overestimates the true exponent $\alpha$. Second, when artificial time series with two different scaling exponents at short and large scales are considered, the $\alpha_1$ value provided by DFA not only can severely underestimate or overestimate the true short-scale exponent, but also depends on the value of the large-scale exponent. This behavior should prevent the use of $\alpha_1$ to describe the scaling properties at short scales: if DFA is applied to two time series with the same scaling behavior at short scales but very different scaling properties at large scales, very different values of $\alpha_1$ will be obtained, although the short-scale properties are identical. These artifacts may lead to wrong interpretations when analyzing real-world time series: on the one hand, for time series with truly perfect scaling, the spurious value of $\alpha_1$ could suggest, wrongly, that some specific mechanism acts only at short time scales in the dynamical system. On the other hand, for time series with genuinely different scaling at short and large scales, the incorrect $\alpha_1$ value would not properly characterize the short-scale behavior of the dynamical system.
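A minimal sketch of the standard DFA procedure described above (function and variable names are our own, not from the paper); it computes $F(\ell)$ over a range of scales and estimates $\alpha$ as the slope of the log–log fit:

import numpy as np

def dfa(x, scales, order=1):
    # Standard DFA: integrate the series, split it into non-overlapping windows
    # of length l, detrend each window with a polynomial of the given order,
    # and return the root-mean-square fluctuation F(l) for every scale l.
    y = np.cumsum(x - np.mean(x))
    F = []
    for l in scales:
        n_win = len(y) // l
        segs = y[:n_win * l].reshape(n_win, l)
        t = np.arange(l)
        res = []
        for seg in segs:
            trend = np.polyval(np.polyfit(t, seg, order), t)
            res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.asarray(F)

# Example: uncorrelated white noise should give alpha close to 0.5.
rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)
scales = np.unique(np.logspace(np.log10(8), np.log10(2000), 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope of the log-log fit
print(f"estimated alpha = {alpha:.2f}")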

3.
In this work, a finite element (FE) method is discussed for the 3D steady Navier–Stokes equations by using the finite element pair $X_h \times M_h$. The method consists of transmitting the finite element solution $(u_h, p_h)$ of the 3D steady Navier–Stokes equations into the finite element solution pairs $(u_h^n, p_h^n)$, based on the finite element space pair $X_h \times M_h$, of the 3D steady linearized Navier–Stokes equations by using the Stokes, Newton, and Oseen iterative methods, where the finite element space pair $X_h \times M_h$ satisfies the discrete inf-sup condition in a 3D domain $\Omega$. Here, we present the weak formulations of the FE method for solving the 3D steady Stokes, Newton, and Oseen iterative equations, provide the existence and uniqueness of the FE solution $(u_h^n, p_h^n)$ of these iterative equations, and deduce the convergence with respect to $(\sigma, h)$ of the FE solution $(u_h^n, p_h^n)$ to the exact solution $(u, p)$ of the 3D steady Navier–Stokes equations in the $H^1$–$L^2$ norm. Finally, we also give the convergence order with respect to $(\sigma, h)$ of the FE velocity $u_h^n$ to the exact velocity $u$ of the 3D steady Navier–Stokes equations in the $L^2$ norm.
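For reference, one common way the three linearizations mentioned above are written in weak form is the following (the bilinear/trilinear forms and sign conventions are ours and may differ from the paper): find $(u_h^n, p_h^n) \in X_h \times M_h$ such that, for all $(v_h, q_h) \in X_h \times M_h$,

\[
\begin{aligned}
\text{Stokes:}\quad & a(u_h^n, v_h) - d(v_h, p_h^n) + d(u_h^n, q_h) = (f, v_h) - b(u_h^{n-1}, u_h^{n-1}, v_h),\\
\text{Newton:}\quad & a(u_h^n, v_h) + b(u_h^{n-1}, u_h^n, v_h) + b(u_h^n, u_h^{n-1}, v_h) - d(v_h, p_h^n) + d(u_h^n, q_h) = (f, v_h) + b(u_h^{n-1}, u_h^{n-1}, v_h),\\
\text{Oseen:}\quad & a(u_h^n, v_h) + b(u_h^{n-1}, u_h^n, v_h) - d(v_h, p_h^n) + d(u_h^n, q_h) = (f, v_h),
\end{aligned}
\]

where $a(\cdot,\cdot)$ is the viscous bilinear form, $d(\cdot,\cdot)$ the pressure–divergence form, and $b(\cdot,\cdot,\cdot)$ the (skew-symmetrized) convection trilinear form.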

4.
Private Information Retrieval (PIR) protocols, which allow a client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, advertisement, etc. Multi-server PIR protocols, where the database is replicated among non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. at CCC 2012 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for $k \geq 3$ servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret sharing schemes with respect to a certain type of "modified universal" relation. In a useful particular instantiation of the paradigm, they used a share conversion from $(2,3)$-CNF sharing over $\mathbb{Z}_m$ to three-additive sharing over $\mathbb{Z}_p^{\beta}$ for primes $p_1, p_2, p$ with $p_1 \neq p_2$ and $m = p_1 \cdot p_2$, where the relation is the modified universal relation $C_{S_m}$. They reduced the question of the existence of the share conversion for a triple $(p_1, p_2, p)$ to the (in)solvability of a certain linear system over $\mathbb{Z}_p$, and provided an efficient (in $m$ and $\log p$) construction of such a sharing scheme. Unfortunately, the size of the system is $\Theta(m^2)$, which makes a direct solution infeasible in practice for large $m$. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for the case of odd $p_1, p_2$ when $p = p_1$, obtaining in this way infinitely many parameters for which the conversion exists, but leaving it open for infinitely many others. In this work, using some algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even $m$ in the case $p = 2$ (we compute $\beta$ in this case) and its absence for even $m$ in the case $p > 2$. This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with $k \geq 3$ servers, using the relation $C_{S_m}$ where $m$ has more than two prime divisors. Another suggestion we make about 3-server PIR is that a shorter server response can be achieved using the relation $C_{S'_m}$ for an extended set $S'_m \supset S_m$. By computer search within the BIKO framework, we found several such sets for small $m$ which yield a share conversion from $(2,3)$-CNF sharing over $\mathbb{Z}_m$ to 3-additive secret sharing over $\mathbb{Z}_p^{\beta'}$, where $\beta' > 0$ is several times smaller than $\beta$, which implies a server response several times shorter. We also suggest that such extended sets $S'_m$ can result in better PIR due to the potential existence of matching-vector families with higher Vapnik–Chervonenkis dimension.
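To make the two sharing formats concrete, here is a minimal sketch (our own illustration, not the BIKO share conversion itself) of $(2,3)$-CNF (replicated) sharing over $\mathbb{Z}_m$, in which the secret is split into three additive parts and each server misses exactly one part, next to plain 3-additive sharing over $\mathbb{Z}_p$:

import random

def cnf_share(secret, m):
    # (2,3)-CNF sharing over Z_m: secret = r0 + r1 + r2 (mod m);
    # server i receives the two parts r_j with j != i.
    r = [random.randrange(m) for _ in range(2)]
    r.append((secret - sum(r)) % m)
    return [(r[(i + 1) % 3], r[(i + 2) % 3]) for i in range(3)]

def additive_share(secret, p):
    # Plain 3-additive sharing over Z_p: secret = s0 + s1 + s2 (mod p).
    s = [random.randrange(p) for _ in range(2)]
    s.append((secret - sum(s)) % p)
    return s

m, p = 6, 2   # e.g. m = 2*3; p = 2 is the case settled in this work
print(cnf_share(5, m))       # each server sees only two of the three parts
print(additive_share(1, p))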

5.
A possible detection of sub-solar-mass ultra-compact objects would lead to new perspectives on the existence of black holes that are not of astrophysical origin and/or on formation scenarios of exotic ultra-compact objects. Both possibilities open new perspectives for a better understanding of our universe. In this work, we investigate the significance of detection of sub-solar-mass binaries with component masses in the range $10^{-2}\,M_\odot$ up to $1\,M_\odot$, within the expected sensitivity of the third-generation ground-based gravitational-wave detectors, viz., the Einstein Telescope (ET) and the Cosmic Explorer (CE). Assuming a minimum amplitude signal-to-noise ratio for detection of $\rho = 8$, we find that the maximum horizon distances for an ultra-compact binary system with component masses $10^{-2}\,M_\odot$ and $1\,M_\odot$ are 40 Mpc and 1.89 Gpc, respectively, for ET, and 125 Mpc and 5.8 Gpc, respectively, for CE. Other cases are also presented in the text. We derive the merger rate and discuss the consequences for the abundance of primordial black holes (PBHs), $f_{\rm PBH}$. Considering the entire mass range $[10^{-2}\text{--}1]\,M_\odot$, we find $f_{\rm PBH} < 0.70$ ($< 0.06$) for ET (CE), respectively.
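For reference, the detection threshold $\rho = 8$ refers to the optimal matched-filter signal-to-noise ratio, whose standard definition (our notation) is

\[
\rho^2 = 4\int_{f_{\min}}^{f_{\max}} \frac{|\tilde{h}(f)|^2}{S_n(f)}\,df,
\]

where $\tilde{h}(f)$ is the frequency-domain waveform and $S_n(f)$ the detector's one-sided noise power spectral density; the horizon distance is then the luminosity distance at which an optimally located and oriented binary produces $\rho = 8$.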

6.
This study deals with drift parameter estimation problems in the sub-fractional Vasicek process given by $dx_t = \theta(\mu - x_t)\,dt + dS_t^H$, with $\theta > 0$ and $\mu \in \mathbb{R}$ unknown and $t \geq 0$; here, $S^H$ represents a sub-fractional Brownian motion (sfBm). We introduce new estimators $\hat{\theta}$ for $\theta$ and $\hat{\mu}$ for $\mu$ based on discrete-time observations, using techniques from Nourdin–Peccati analysis. For the proposed estimators $\hat{\theta}$ and $\hat{\mu}$, strong consistency and asymptotic normality are established by employing the properties of $S^H$. Moreover, we provide numerical simulations for sfBm and the related Vasicek-type process for different values of the Hurst index $H$.
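A minimal simulation sketch in the spirit of the numerical experiments mentioned above (our own illustration; it assumes the standard sfBm covariance $C_H(s,t) = s^{2H} + t^{2H} - \tfrac{1}{2}[(s+t)^{2H} + |t-s|^{2H}]$ and a simple Euler discretization of the Vasicek dynamics):

import numpy as np

def sfbm_path(n, T, H, rng):
    # Sample sub-fractional Brownian motion on n grid points in (0, T]
    # by Cholesky factorization of its covariance matrix.
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = s**(2*H) + u**(2*H) - 0.5*((s + u)**(2*H) + np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    return t, L @ rng.standard_normal(n)

def vasicek_euler(theta, mu, x0, t, S):
    # Euler scheme for dx_t = theta*(mu - x_t) dt + dS_t^H driven by the path S.
    x = np.empty(len(t) + 1)
    x[0] = x0
    dt = t[1] - t[0]
    dS = np.diff(np.concatenate(([0.0], S)))
    for k in range(len(t)):
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + dS[k]
    return x

rng = np.random.default_rng(1)
t, S = sfbm_path(n=500, T=5.0, H=0.7, rng=rng)
x = vasicek_euler(theta=2.0, mu=1.0, x0=0.0, t=t, S=S)
print(x[-5:])  # tail of one simulated trajectory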

7.
We study the viable Starobinsky f(R) dark energy model in spatially non-flat FLRW backgrounds, where $f(R) = R - \lambda R_{\rm ch}\left[1 - \left(1 + R^2/R_{\rm ch}^2\right)^{-1}\right]$, with $R_{\rm ch}$ and $\lambda$ representing the characteristic curvature scale and the model parameter, respectively. We modify the CAMB and CosmoMC packages with the recent observational data to constrain Starobinsky f(R) gravity and the curvature density parameter $\Omega_K$. In particular, we find the model and density parameters to be $\lambda^{-1} < 0.283$ at 68% C.L. and $\Omega_K = 0.00099^{+0.0044}_{-0.0042}$ at 95% C.L., respectively. The best-fit $\chi^2$ result shows that $\chi^2_{f(R)} \lesssim \chi^2_{\Lambda{\rm CDM}}$, indicating that the viable f(R) gravity model is consistent with $\Lambda$CDM when $\Omega_K$ is set as a free parameter. We also evaluate the values of AIC, BIC, and DIC for the best-fit results of the f(R) and $\Lambda$CDM models in the non-flat universe.
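For reference, the information criteria quoted at the end are commonly defined as follows (standard definitions, our notation):

\[
\mathrm{AIC} = -2\ln\mathcal{L}_{\max} + 2k, \qquad
\mathrm{BIC} = -2\ln\mathcal{L}_{\max} + k\ln N, \qquad
\mathrm{DIC} = \bar{D} + p_D,\quad p_D = \bar{D} - D(\bar{\theta}),
\]

where $k$ is the number of free parameters, $N$ the number of data points, $D = -2\ln\mathcal{L}$ the deviance, and bars denote posterior means.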

8.
The review deals with a novel approach (MNEQT) to nonequilibrium thermodynamics (NEQT) that is based on the concept of internal equilibrium (IEQ) in an enlarged state space $S_Z$ involving internal variables as additional state variables. The IEQ macrostates are unique in $S_Z$ and have no memory, just as EQ macrostates are in the EQ state space $S_X \subset S_Z$. The approach provides a clear strategy to identify the internal variables for any model, illustrated through several examples. The MNEQT deals directly with system-intrinsic quantities, which are very useful as they fully describe irreversibility. Because of this, the MNEQT solves a long-standing problem in NEQT of identifying a unique global temperature $T$ of a system, thus fulfilling Planck's dream of a global temperature for any system, even if it is not uniform, such as when it is driven between two heat baths; $T$ has the conventional interpretation of satisfying the Clausius statement that the exchange macroheat $d_eQ$ flows from hot to cold, and other sensible criteria expected of a temperature. The concept of the generalized macroheat $dQ = d_eQ + d_iQ$ converts the Clausius inequality $dS \ge d_eQ/T_0$ for a system in a medium at temperature $T_0$ into the Clausius equality $dS = dQ/T$, which also covers macrostates with memory, and follows from the extensivity property. The equality also holds for a NEQ isolated system. The novel approach is extremely useful as it also works when no internal state variables are used to study nonunique macrostates in the EQ state space $S_X$, at the expense of explicit time dependence in the entropy that gives rise to memory effects. To show the usefulness of the novel approach, we give several examples such as the irreversible Carnot cycle, friction and Brownian motion, the free expansion, etc.
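For comparison, the classical textbook decomposition of the entropy change, which the macroheat decomposition above parallels, reads (standard relations, not the MNEQT derivation itself)

\[
dS = d_eS + d_iS, \qquad d_eS = \frac{d_eQ}{T_0}, \qquad d_iS \ge 0,
\]

from which the Clausius inequality $dS \ge d_eQ/T_0$ follows immediately for exchange with a medium at temperature $T_0$.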

9.
In this work, we first derive novel parameterized identities for the left and right parts of the $(p,q)$-analogue of the Hermite–Hadamard inequality. Second, using these new parameterized identities, we give new parameterized $(p,q)$-trapezoid and parameterized $(p,q)$-midpoint type integral inequalities via $\eta$-quasiconvex functions. By changing the value of the parameter $\mu \in [0,1]$, some new special cases are obtained from the main results, and some known results are recovered as well. Finally, an application to special means is given. This new research has the potential to establish new boundaries in the comparative literature and some well-known implications. From an application perspective, the proposed research on the $\eta$-quasiconvex function yields interesting results that illustrate the applicability and strength of the results obtained.
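For context, the classical Hermite–Hadamard inequality that these $(p,q)$-results generalize states that, for a convex function $f$ on $[a,b]$,

\[
f\!\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2}.
\]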

10.
In this paper, we establish new $(p,q)_{\kappa_1}$-integral and $(p,q)_{\kappa_2}$-integral identities. By employing these new identities, we establish new $(p,q)_{\kappa_1}$- and $(p,q)_{\kappa_2}$-trapezoidal integral-type inequalities through strongly convex and quasi-convex functions. Finally, some examples are given to illustrate the investigated results.

11.
12.
Studies of complex networks have increased in recent years, and different applications have appeared in geophysics. Seismicity represents a complex and dynamic system with open questions related to earthquake occurrence. In this work, we carry out an analysis to understand the physical interpretation of two metrics of complex systems: the slope of the probability distribution of connectivity ($\gamma$) and the betweenness centrality (BC). To conduct this study, we use seismic datasets recorded for three large earthquakes that occurred in Chile: the $M_w$ 8.2 Iquique earthquake (2014), the $M_w$ 8.4 Illapel earthquake (2015), and the $M_w$ 8.8 Cauquenes earthquake (2010). We find a linear relationship between the b-value and the $\gamma$ value, with the interesting finding that the ratio between the b-value and $\gamma$ is ∼0.4. We also explore a possible physical meaning of the BC. As a first result, we find that the behaviour of this metric is not the same for the three large earthquakes, and it seems that this metric is related neither to the b-value nor to the coupling of the zone. We present first results on the physical meaning of complex-network metrics in seismicity. These first results are promising, and we hope to carry out further analyses to understand the physics that these complex-network parameters represent in a seismic system.
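For reference, the b-value discussed above is the slope parameter of the Gutenberg–Richter frequency–magnitude relation (standard form, our notation):

\[
\log_{10} N(M) = a - b\,M,
\]

where $N(M)$ is the number of earthquakes with magnitude at least $M$ and $a$ characterizes the overall seismicity rate.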

13.
The aim of this paper is to show that $\alpha$-limit sets in Lorenz maps do not have to be completely invariant. This highlights unexpected dynamical behavior in these maps and reveals gaps in the existing literature. A similar result is obtained for unimodal maps on $[0,1]$. On the basis of the provided examples, we also show how the study of the structure of $\alpha$-limit sets is closely connected with the calculation of topological entropy.

14.
15.
We use an m-vicinity method to examine Ising models on hypercube lattices of high dimensions $d \geq 3$. This method is applicable to both short-range and long-range interactions. We introduce a small parameter that determines whether the method can be used when calculating the free energy. When we account for interaction with the nearest neighbors only, the value of this parameter depends on the dimension of the lattice $d$. We obtain an expression for the critical temperature in terms of the interaction constants that is in good agreement with the results of computer simulations. For $d = 5, 6, 7$, our theoretical estimates match the numerical results both qualitatively and quantitatively. For $d = 3, 4$, our method is sufficiently accurate for the calculation of the critical temperatures; however, it predicts a finite jump of the heat capacity at the critical point. In the case of the three-dimensional lattice ($d = 3$), this contradicts the commonly accepted picture of the type of singularity at the critical point. For the four-dimensional lattice ($d = 4$), the character of the singularity is under current discussion. For the dimensions $d = 1, 2$ the m-vicinity method is not applicable.
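For reference, the nearest-neighbor Ising Hamiltonian on a $d$-dimensional hypercubic lattice considered here is conventionally written as (standard form, our notation)

\[
H = -J \sum_{\langle i j \rangle} s_i s_j, \qquad s_i = \pm 1,
\]

where the sum runs over nearest-neighbor pairs and $J$ is the interaction constant.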

16.
17.
This paper systematically presents the λ-deformation as the canonical framework of deformation to the dually flat (Hessian) geometry, which has been well established in information geometry. We show that, based on deforming the Legendre duality, all objects in the Hessian case have their correspondence in the λ-deformed case: λ-convexity, λ-conjugation, λ-biorthogonality, λ-logarithmic divergence, λ-exponential and λ-mixture families, etc. In particular, λ-deformation unifies Tsallis and Rényi deformations by relating them to two manifestations of an identical λ-exponential family, under subtractive or divisive probability normalization, respectively. Unlike the different Hessian geometries of the exponential and mixture families, the λ-exponential family, in turn, coincides with the λ-mixture family after a change of random variables. The resulting statistical manifolds, while still carrying a dualistic structure, replace the Hessian metric and a pair of dually flat conjugate affine connections with a conformal Hessian metric and a pair of projectively flat connections carrying constant (nonzero) curvature. Thus, λ-deformation is a canonical framework in generalizing the well-known dually flat Hessian structure of information geometry.

18.
19.
We used the blast-wave model with Boltzmann–Gibbs statistics to analyze the experimental data measured by the NA61/SHINE Collaboration in inelastic (INEL) proton–proton collisions in different rapidity slices at different center-of-mass energies. The particles used in this study were $\pi^+$, $\pi^-$, $K^+$, $K^-$, and $\bar{p}$. We extracted the kinetic freeze-out temperature, transverse flow velocity, and kinetic freeze-out volume from the transverse momentum spectra of the particles. We observed that the kinetic freeze-out temperature is rapidity and energy dependent, while the transverse flow velocity does not depend on them. Furthermore, we observed that the kinetic freeze-out volume is energy dependent but remains constant with changing rapidity. We also observed that all three parameters are mass dependent: with increasing mass, the kinetic freeze-out temperature increases, while the transverse flow velocity and the kinetic freeze-out volume decrease.
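For reference, the Boltzmann–Gibbs blast-wave parameterization commonly used for such transverse-momentum fits has the form (standard expression, our notation; the paper's exact conventions may differ):

\[
\frac{dN}{p_T\,dp_T} \propto \int_0^R r\,dr\; m_T\, I_0\!\left(\frac{p_T\sinh\rho(r)}{T_{\rm kin}}\right) K_1\!\left(\frac{m_T\cosh\rho(r)}{T_{\rm kin}}\right),
\qquad \rho(r)=\tanh^{-1}\!\big[\beta_S\,(r/R)^{n_0}\big],
\]

where $m_T=\sqrt{p_T^2+m^2}$, $T_{\rm kin}$ is the kinetic freeze-out temperature, $\beta_S$ the surface flow velocity, and $I_0$, $K_1$ are modified Bessel functions.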

20.
Recently, Savaré and Toscani proved that the Rényi entropy power of general probability densities solving the $p$-nonlinear heat equation in $\mathbb{R}^n$ is a concave function of time under certain conditions on the three parameters $n, p, \mu$, which extends Costa's concavity inequality for Shannon's entropy power to the Rényi entropy power. In this paper, we give a condition $\Phi(n, p, \mu)$ on $n, p, \mu$ under which the concavity of the Rényi entropy power is valid. The condition $\Phi(n, p, \mu)$ contains Savaré and Toscani's condition as a special case, along with many more cases. Precisely, the points $(n, p, \mu)$ satisfying Savaré and Toscani's condition form a two-dimensional subset of $\mathbb{R}^3$, while the points satisfying the condition $\Phi(n, p, \mu)$ form a three-dimensional subset of $\mathbb{R}^3$. Furthermore, $\Phi(n, p, \mu)$ gives the necessary and sufficient condition in a certain sense. Finally, the conditions are obtained with a systematic approach.
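For context, Costa's original result, which the above generalizes, concerns the Shannon entropy power: for a density $u(\cdot,t)$ solving the linear heat equation $\partial_t u = \Delta u$ in $\mathbb{R}^n$, the quantity

\[
N(u) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(u)}, \qquad h(u) = -\int_{\mathbb{R}^n} u \log u\, dx,
\]

is a concave function of $t$ (standard definitions, our notation).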
