Similar Documents (20 results)
1.
A possible detection of sub-solar-mass ultra-compact objects would open new perspectives on the existence of black holes that are not of astrophysical origin and/or on formation scenarios of exotic ultra-compact objects. Both possibilities open new avenues for a better understanding of our universe. In this work, we investigate the significance of detection of sub-solar-mass binaries with component masses in the range 10^-2 M_⊙ up to 1 M_⊙, within the expected sensitivity of the third-generation ground-based gravitational-wave detectors, viz., the Einstein Telescope (ET) and the Cosmic Explorer (CE). Assuming a minimum amplitude signal-to-noise ratio for detection of ρ = 8, we find that the maximum horizon distances for an ultra-compact binary system with component masses 10^-2 M_⊙ and 1 M_⊙ are 40 Mpc and 1.89 Gpc, respectively, for ET, and 125 Mpc and 5.8 Gpc, respectively, for CE. Other cases are also presented in the text. We derive the merger rate and discuss consequences for the abundance of primordial black holes (PBH), f_PBH. Considering the entire mass range [10^-2, 1] M_⊙, we find f_PBH < 0.70 (< 0.06) for ET (CE), respectively.
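As a plausibility check on the numbers quoted above (a back-of-envelope sketch, not the paper's matched-filter calculation): at fixed signal-to-noise ratio, the inspiral horizon distance scales roughly as the chirp mass to the 5/6 power, and for equal-mass binaries the chirp mass is proportional to the component mass.

```python
# Back-of-envelope scaling check (not the paper's computation): at fixed SNR,
# the inspiral horizon distance scales as d_h ~ Mc^(5/6); for equal-mass
# binaries the chirp mass Mc is proportional to the component mass.

def scaled_horizon(d_ref_mpc, m_ref, m):
    """Scale a reference horizon distance (Mpc) to another component mass."""
    return d_ref_mpc * (m / m_ref) ** (5.0 / 6.0)

# Quoted ET anchor: 1.89 Gpc at 1 Msun; the paper quotes 40 Mpc at 1e-2 Msun.
predicted = scaled_horizon(1890.0, 1.0, 1e-2)
print(round(predicted, 1))  # ~40.7 Mpc, consistent with the quoted 40 Mpc
```

The quoted ET pair (1.89 Gpc at 1 M_⊙, 40 Mpc at 10^-2 M_⊙) indeed follows this scaling to within a few percent.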

2.
Private Information Retrieval (PIR) protocols, which allow a client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, and advertisement. Multi-server PIR protocols, where the database is replicated among non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. at CCC '12 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for k ≥ 3 servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret-sharing schemes with respect to a certain type of "modified universal" relation. In a useful particular instantiation of the paradigm, they used a share conversion from (2,3)-CNF over Z_m to 3-additive sharing over Z_{p^β} for primes p_1, p_2, p, where p_1 ≠ p_2 and m = p_1·p_2, and the relation is the modified universal relation C_{S_m}. They reduced the question of the existence of the share conversion for a triple (p_1, p_2, p) to the (in)solvability of a certain linear system over Z_p, and provided an efficient (in m and log p) construction of such a sharing scheme. Unfortunately, the size of the system is Θ(m²), which makes a direct solution infeasible for big m in practice. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for the case of odd p_1, p_2 when p = p_1, obtaining in this way infinitely many parameters for which the conversion exists, but for infinitely many others it remained open. In this work, using algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even m in the case p = 2 (we computed β in this case) and the absence of the conversion for even m in the case p > 2.
This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with k ≥ 3 servers, using the relation C_{S_m} where m has more than two prime divisors. We also suggest that for 3-server PIR it is possible to achieve a shorter server response using the relation C_{S'_m} for an extended set S'_m ⊇ S_m. By computer search in the BIKO framework, we found several such sets for small m which result in a share conversion from (2,3)-CNF over Z_m to 3-additive secret sharing over Z_{p^β'}, where β' > 0 is several times smaller than β, which implies a several times shorter server response. We also suggest that such extended sets S'_m can result in better PIR due to the potential existence of matching-vector families with higher Vapnik–Chervonenkis dimension.
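For readers unfamiliar with the input side of the share conversion, a minimal sketch of (2,3)-CNF (replicated) sharing over Z_m, the scheme named above, might look as follows; the function names are illustrative, not from BIKO.

```python
import random

def cnf_share(secret, m):
    """(2,3)-CNF (replicated) sharing over Z_m: write secret = r0 + r1 + r2
    (mod m) and give party i the two addends {r_j : j != i}.  Any two parties
    jointly hold all three addends; a single party is missing one addend, so
    its view is uniform and reveals nothing about the secret."""
    r = [random.randrange(m) for _ in range(2)]
    r.append((secret - sum(r)) % m)
    return [{j: r[j] for j in range(3) if j != i} for i in range(3)]

def cnf_reconstruct(m, share_a, share_b):
    merged = {**share_a, **share_b}  # union of the two parties' addends
    return sum(merged.values()) % m

m = 6  # m = p1 * p2 with p1 = 2, p2 = 3, as in the BIKO instantiation
shares = cnf_share(4, m)
print(cnf_reconstruct(m, shares[0], shares[1]))  # 4
```

The conversion studied in the paper maps such CNF shares into 3-additive shares over Z_{p^β} while respecting the relation C_{S_m}; that step is the hard part and is not sketched here.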

3.
Detrended Fluctuation Analysis (DFA) has become a standard method to quantify the correlations and scaling properties of real-world complex time series. For a given scale of observation ℓ, DFA provides the function F(ℓ), which quantifies the fluctuations of the time series around the local trend, which is subtracted (detrended). If the time series exhibits scaling properties, then F(ℓ) ~ ℓ^α asymptotically, and the scaling exponent α is typically estimated as the slope of a linear fit in the log F(ℓ) vs. log ℓ plot. In this way, α measures the strength of the correlations and characterizes the underlying dynamical system. However, in many cases, and especially for physiological time series, the scaling behavior is different at short and long scales, resulting in log F(ℓ) vs. log ℓ plots with two different slopes: α1 at short scales and α2 at large scales of observation. These two exponents are usually associated with the existence of different mechanisms that work at distinct time scales acting on the underlying dynamical system. Here, however, since the power-law behavior of F(ℓ) is asymptotic, we question the use of α1 to characterize the correlations at short scales. To this end, we first show that, even for artificial time series with perfect scaling, i.e., with a single exponent α valid for all scales, DFA provides an α1 value that systematically overestimates the true exponent α. Second, when artificial time series with two different scaling exponents at short and large scales are considered, the α1 value provided by DFA not only can severely underestimate or overestimate the true short-scale exponent, but also depends on the value of the large-scale exponent.
This behavior should prevent the use of α1 to describe the scaling properties at short scales: if DFA is used on two time series with the same scaling behavior at short scales but very different scaling properties at large scales, very different values of α1 will be obtained, although the short-scale properties are identical. These artifacts may lead to wrong interpretations when analyzing real-world time series: on the one hand, for time series with truly perfect scaling, the spurious value of α1 could wrongly suggest that some specific mechanism acts only at short time scales in the dynamical system. On the other hand, for time series with genuinely different scaling at short and large scales, the incorrect α1 value would not properly characterize the short-scale behavior of the dynamical system.
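A minimal DFA sketch (assuming the standard formulation: cumulative profile, windowed polynomial detrending, RMS fluctuation) illustrates how α is read off as the log–log slope; the fit range and window handling here are simplified choices, not the paper's exact setup.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Return F(l) for each scale l: the RMS fluctuation of the integrated
    series around a least-squares polynomial trend fitted in non-overlapping
    windows of length l."""
    y = np.cumsum(x - np.mean(x))  # profile (integrated series)
    F = []
    for l in scales:
        n = len(y) // l            # number of full windows
        f2 = []
        for k in range(n):
            seg = y[k * l:(k + 1) * l]
            t = np.arange(l)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.normal(size=2**14)         # white noise: true alpha = 0.5
scales = np.unique(np.logspace(1, 3, 12).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))             # close to 0.5 for white noise
```

Splitting the fit into a short-scale and a large-scale range is exactly where the α1 bias discussed above enters.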

4.
Two-dimensional fuzzy entropy, dispersion entropy, and their multiscale extensions (MFuzzyEn2D and MDispEn2D, respectively) have shown promising results for image classification. However, these results rely on the selection of key parameters that may strongly influence the entropy values obtained, and the optimal choice for these parameters has not been studied thoroughly. We propose a study of the impact of these parameters on image classification. For this purpose, the entropy-based algorithms are applied to a variety of images from different datasets, each containing multiple image classes. Several parameter combinations are used to obtain the entropy values. These entropy values are then fed to a range of machine learning classifiers, and the algorithm parameters are analyzed based on the classification results. By using specific parameters, we show that both MFuzzyEn2D and MDispEn2D approach state-of-the-art performance in image classification for multiple image types. They lead to an average maximum accuracy of more than 95% for all the datasets tested. Moreover, MFuzzyEn2D yields better classification performance than MDispEn2D in the majority of cases. Furthermore, the choice of classifier does not have a significant impact on the classification of the features extracted by either entropy algorithm. The results open new perspectives for these entropy-based measures in textural analysis.
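As a rough illustration of the kind of measure discussed, here is a toy sketch, not the published MDispEn2D algorithm: it uses a quantile mapping instead of the normal-CDF mapping and omits the multiscale coarse-graining; the parameters m (embedding window) and c (number of classes) are the kind of choices the study above analyzes.

```python
import numpy as np
from collections import Counter

def dispen2d(img, m=2, c=3):
    """Simplified 2D dispersion-entropy sketch: quantile-map pixels to c
    classes, slide an m-by-m window, and take the normalized Shannon entropy
    of the dispersion-pattern frequencies."""
    # quantile mapping of pixel values to classes 0..c-1
    ranks = np.argsort(np.argsort(img.ravel())).reshape(img.shape)
    classes = (ranks * c) // img.size
    H, W = classes.shape
    patterns = Counter(
        tuple(classes[i:i + m, j:j + m].ravel())
        for i in range(H - m + 1) for j in range(W - m + 1)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(float(c) ** (m * m))

rng = np.random.default_rng(1)
noise = rng.random((64, 64))                        # irregular texture
gradient = np.tile(np.linspace(0, 1, 64), (64, 1))  # regular texture
print(dispen2d(noise) > dispen2d(gradient))         # True: noise is more irregular
```

The returned value lies in [0, 1]; irregular textures score near 1 and smooth ones near 0, which is what makes such measures usable as classification features.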

5.
Based on the Kedem–Katchalsky formalism, a model equation for the membrane potential (Δψ_s) generated in a membrane system was derived for conditions of concentration polarization. In this system, a horizontally oriented electro-neutral biomembrane separates solutions of the same electrolyte at different concentrations. The consequence of concentration polarization is the creation, on both sides of the membrane, of concentration boundary layers. The basic equation of this model includes the unknown ratio of solution concentrations (C_i/C_e) at the membrane/concentration-boundary-layer interfaces. We present a procedure for calculating C_i/C_e based on novel equations derived in the paper containing the transport parameters of the membrane (L_p, σ, and ω), the solutions (ρ, ν), the concentration boundary layer thicknesses (δ_l, δ_h), the concentration Rayleigh number (R_C), the concentration polarization factor (ζ_s), the volume flux (J_v), the mechanical pressure difference (ΔP), and the ratio of known solution concentrations (C_h/C_l). From the resulting equation, Δψ_s was calculated for various combinations of the solution concentration ratio (C_h/C_l), the concentration Rayleigh number (R_C), the concentration polarization factor (ζ_s), and the hydrostatic pressure difference (ΔP). Calculations were performed for the case where an aqueous NaCl solution with a fixed concentration of 1 mol m⁻³ (C_l) was on one side of the membrane, and on the other side was an aqueous NaCl solution with a concentration between 1 and 15 mol m⁻³ (C_h). It is shown that Δψ_s depends on the value of one of the factors (i.e., ΔP, C_h/C_l, R_C and ζ_s) at fixed values of the other three.

6.
7.
Studies of complex networks have increased in recent years, and different applications have been developed in geophysics. Seismicity represents a complex and dynamic system with open questions related to earthquake occurrence. In this work, we carry out an analysis to understand the physical interpretation of two metrics of complex systems: the slope of the probability distribution of connectivity (γ) and the betweenness centrality (BC). To conduct this study, we use seismic datasets recorded for three large earthquakes that occurred in Chile: the Mw 8.2 Iquique earthquake (2014), the Mw 8.4 Illapel earthquake (2015) and the Mw 8.8 Cauquenes earthquake (2010). We find a linear relationship between the b-value and the γ value, with an interesting finding that the ratio between the b-value and γ is ∼0.4. We also explore a possible physical meaning of the BC. As a first result, we find that the behaviour of this metric is not the same for the three large earthquakes, and it seems that this metric is not related to the b-value or to the coupling of the zone. We present the first results on the physical meaning of metrics from complex networks in seismicity. These first results are promising, and we hope to carry out further analyses to understand the physics that these complex network parameters represent in a seismic system.
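The b-value mentioned above is the Gutenberg–Richter slope; a standard way to estimate it is the Aki/Utsu maximum-likelihood formula, sketched here on synthetic data rather than the Chilean catalogs.

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood estimate of the Gutenberg-Richter b-value
    for magnitudes at or above the completeness magnitude mc (dm is the
    magnitude bin width; use dm=0 for continuous magnitudes)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic catalog obeying Gutenberg-Richter with a true b-value of 1.0:
# N(>=m) ~ 10**(-b*m) means m - mc is exponential with scale log10(e)/b.
rng = np.random.default_rng(2)
mags = 3.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=20000)
print(round(b_value(mags, mc=3.0, dm=0.0), 2))  # ≈ 1.0 (the true value)
```

Comparing such b-value estimates against the network exponent γ is what yields the ∼0.4 ratio reported above.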

8.
We study the viable Starobinsky f(R) dark energy model in spatially non-flat FLRW backgrounds, where f(R) = R − λR_ch[1 − (1 + R²/R_ch²)^(−1)], with R_ch and λ representing the characteristic curvature scale and the model parameter, respectively. We modify the CAMB and CosmoMC packages with the recent observational data to constrain Starobinsky f(R) gravity and the density parameter of curvature Ω_K. In particular, we find the model and density parameters to be λ⁻¹ < 0.283 at 68% C.L. and Ω_K = −0.00099 (+0.0044, −0.0042) at 95% C.L., respectively. The best-fit χ² result shows that χ²_f(R) ≲ χ²_ΛCDM, indicating that the viable f(R) gravity model is consistent with ΛCDM when Ω_K is set as a free parameter. We also evaluate the values of AIC, BIC and DIC for the best-fit results of the f(R) and ΛCDM models in the non-flat universe.
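The functional form quoted above and two of the information criteria can be sketched directly; the limiting-case checks below follow from the formula itself (DIC is omitted here since it requires the full posterior, which this sketch does not model).

```python
import numpy as np

def f_starobinsky(R, lam, R_ch):
    """Starobinsky dark-energy form quoted above (n = 1):
    f(R) = R - lam * R_ch * [1 - (1 + R^2/R_ch^2)^(-1)]."""
    return R - lam * R_ch * (1.0 - 1.0 / (1.0 + (R / R_ch) ** 2))

def aic(chi2_min, k):
    """Akaike information criterion for k free parameters."""
    return chi2_min + 2.0 * k

def bic(chi2_min, k, n_data):
    """Bayesian information criterion for n_data data points."""
    return chi2_min + k * np.log(n_data)

# Limiting behaviour built into the formula: general relativity is recovered
# at low curvature, and an effective cosmological constant lam * R_ch emerges
# at high curvature.
print(f_starobinsky(1e-3, 2.0, 1.0))  # ~ R itself (GR limit)
print(f_starobinsky(1e3, 2.0, 1.0))   # ~ R - 2 (effective Lambda)
```

These two limits are what make the model "viable": it passes local gravity tests at high curvature history and accelerates the late-time expansion.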

9.
10.
This paper systematically presents the λ-deformation as the canonical framework of deformation to the dually flat (Hessian) geometry, which has been well established in information geometry. We show that, based on deforming the Legendre duality, all objects in the Hessian case have their correspondence in the λ-deformed case: λ-convexity, λ-conjugation, λ-biorthogonality, λ-logarithmic divergence, λ-exponential and λ-mixture families, etc. In particular, λ-deformation unifies Tsallis and Rényi deformations by relating them to two manifestations of an identical λ-exponential family, under subtractive or divisive probability normalization, respectively. Unlike the different Hessian geometries of the exponential and mixture families, the λ-exponential family, in turn, coincides with the λ-mixture family after a change of random variables. The resulting statistical manifolds, while still carrying a dualistic structure, replace the Hessian metric and a pair of dually flat conjugate affine connections with a conformal Hessian metric and a pair of projectively flat connections carrying constant (nonzero) curvature. Thus, λ-deformation is a canonical framework in generalizing the well-known dually flat Hessian structure of information geometry.
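The Tsallis deformation referenced above is built on the deformed logarithm and its inverse; a minimal sketch using the standard Tsallis q-definitions (which the paper's λ-deformation generalizes; the exact λ-conventions are the paper's own):

```python
import numpy as np

def q_log(x, q):
    """Tsallis deformed logarithm: ln_q(x) = (x**(1-q) - 1)/(1-q),
    reducing to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q):
    """Inverse deformation: exp_q(x) = (1 + (1-q)*x)**(1/(1-q)) where defined,
    reducing to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

# exp_q inverts ln_q, and both reduce to the classical pair at q = 1:
print(round(q_exp(q_log(2.5, 0.7), 0.7), 6))  # 2.5 (round trip)
print(q_log(np.e, 1.0))                        # 1.0 (classical limit)
```

The "λ-conjugation" and "λ-exponential family" notions above play the role that this logarithm/exponential pair plays in the undeformed (Hessian) case.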

11.
Since the grand partition function Z_q for the so-called q-particles (i.e., quons), q ∈ (−1, 1), cannot be computed by using the standard second quantisation technique involving the full Fock space construction for q = 0 and its q-deformations for the remaining cases, we determine such grand partition functions in order to obtain the natural generalisation of the Planck distribution to q ∈ [−1, 1]. We also note the (non-)surprising fact that the right grand partition function for the Boltzmann case (i.e., q = 0) can easily be obtained by using the full Fock space second quantisation, by considering the appropriate correction by the Gibbs factor 1/n! in the n-th term of the power series expansion with respect to the fugacity z. As an application, we briefly discuss the equation of state for a gas of free quons and the condensation phenomenon into the ground state, which also occurs for Bose-like quons q ∈ (0, 1).
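One commonly quoted way to interpolate mean occupation numbers between the three classical statistics is n(ε) = 1/(e^{β(ε−μ)} − q); whether this matches the quon distribution derived in the paper exactly is an assumption of this sketch, which only illustrates the interpolation idea.

```python
import numpy as np

def occupation(eps, beta, mu, q):
    """Mean occupation n(eps) = 1/(exp(beta*(eps - mu)) - q): a commonly
    quoted interpolation between Fermi-Dirac (q = -1), Boltzmann (q = 0)
    and Bose-Einstein (q = +1) statistics.  Treat the identification with
    the paper's quon distribution as an assumption of this sketch."""
    return 1.0 / (np.exp(beta * (eps - mu)) - q)

x = 2.0  # beta * (eps - mu)
print(round(occupation(x, 1.0, 0.0, -1.0), 4))  # Fermi-Dirac: 1/(e^2 + 1)
print(round(occupation(x, 1.0, 0.0, 0.0), 4))   # Boltzmann:   e^-2
print(round(occupation(x, 1.0, 0.0, 1.0), 4))   # Bose-Einstein: 1/(e^2 - 1)
```

At fixed energy the occupation increases monotonically with q, which is the qualitative content of the Bose-like condensation remark above.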

12.
Characterizing the topology and random walks of a random network is difficult because the connections in the network are uncertain. We propose a class of generalized weighted Koch networks by replacing the triangles in the traditional Koch network with a graph R_s according to a probability 0 ≤ p ≤ 1 and assigning weights to the network. Then, we determine the range of several indicators that characterize the topological properties of generalized weighted Koch networks by examining the two models under the extreme conditions p = 0 and p = 1, including average degree, degree distribution, clustering coefficient, diameter, and average weighted shortest path. In addition, we give a lower bound on the average trapping time (ATT) in the trapping problem of generalized weighted Koch networks and also reveal the linear, super-linear, and sub-linear relationships between the ATT and the number of nodes in the network.
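The trapping problem above can be illustrated on a small unweighted example: the mean first-passage (trapping) time of an unbiased walk satisfies a linear system, solved here on a first-generation Koch-like network (the node labels and the choice of trap are illustrative, not the paper's construction).

```python
import numpy as np

def mean_trapping_time(adj, trap):
    """Average trapping time (ATT) of an unbiased random walk: solve the
    linear system T_i = 1 + mean of T_j over neighbors j, with T_trap = 0,
    then average T_i over all non-trap starting nodes."""
    nodes = [i for i in adj if i != trap]
    idx = {v: k for k, v in enumerate(nodes)}
    A = np.eye(len(nodes))
    b = np.ones(len(nodes))
    for i in nodes:
        deg = len(adj[i])
        for j in adj[i]:
            if j != trap:
                A[idx[i], idx[j]] -= 1.0 / deg
    return np.linalg.solve(A, b).mean()

# First-generation (unweighted) Koch network: a central triangle {0, 1, 2}
# with a new triangle glued to each of its corners.
adj = {0: [1, 2, 3, 4], 1: [0, 2, 5, 6], 2: [0, 1, 7, 8],
       3: [0, 4], 4: [0, 3], 5: [1, 6], 6: [1, 5], 7: [2, 8], 8: [2, 7]}
print(round(mean_trapping_time(adj, trap=0), 3))
```

The paper's lower bound concerns how this quantity grows with the number of nodes as the weighted network is iterated, which a fixed small example cannot show but the same linear system defines.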

13.
Magnetic shape-memory materials are potential magnetic refrigerants, due to the caloric properties of their magnetic-field-induced martensitic transformation. The first-order nature of the martensitic transition may be the origin of hysteresis effects that can hinder practical applications. Moreover, the presence of latent heat in these transitions requires direct methods to measure the entropy and to correctly analyze the magnetocaloric effect. Here, we investigated the magnetocaloric effect in the Heusler material Ni1.7Pt0.3MnGa by combining an indirect approach to determine the entropy change from isofield magnetization curves with direct heat-flow measurements using a Peltier calorimeter. Our results demonstrate that the magnetic entropy change ΔS in the vicinity of the first-order martensitic phase transition depends on the measuring method and is directly connected with the temperature and field history of the experimental processes.
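The indirect approach mentioned above rests on the Maxwell relation ΔS(T) = ∫₀^Hmax (∂M/∂T)_H dH; a numerical sketch on smooth toy data (not the Ni1.7Pt0.3MnGa measurements) looks like this.

```python
import numpy as np

def delta_s(T, H, M):
    """Indirect magnetocaloric estimate via the Maxwell relation
    dS(T; 0 -> Hmax) = integral over H of (dM/dT)_H, with M of shape
    (len(T), len(H)) assembled from isofield magnetization curves.  Near a
    first-order transition this estimate must be cross-checked against
    direct calorimetry, which is the point of the paper."""
    dM_dT = np.gradient(M, T, axis=0)
    # trapezoidal rule over H (written out for NumPy-version portability)
    return np.sum(0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1]) * np.diff(H), axis=1)

# Smooth toy data (not the measured alloy): M(T, H) = H * (1 - tanh((T-300)/5))
T = np.linspace(280.0, 320.0, 81)
H = np.linspace(0.0, 2.0, 41)
M = H[None, :] * (1.0 - np.tanh((T[:, None] - 300.0) / 5.0))
dS = delta_s(T, H, M)
print(round(float(dS[np.argmin(np.abs(T - 300.0))]), 3))  # ~ -0.4 at the transition
```

For these smooth toy data the Maxwell estimate is reliable; the paper's message is that near a genuine first-order transition, with latent heat and hysteresis, it is not sufficient on its own.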

14.
In this work, a finite element (FE) method is discussed for the 3D steady Navier–Stokes equations by using the finite element pair X_h × M_h. The method consists of transforming the finite element solution (u_h, p_h) of the 3D steady Navier–Stokes equations into the finite element solution pairs (u_h^n, p_h^n), based on the finite element space pair X_h × M_h, of the 3D steady linearized Navier–Stokes equations by using the Stokes, Newton and Oseen iterative methods, where the finite element space pair X_h × M_h satisfies the discrete inf-sup condition in a 3D domain Ω. Here, we present the weak formulations of the FE method for solving the 3D steady Stokes, Newton and Oseen iterative equations, provide the existence and uniqueness of the FE solution (u_h^n, p_h^n) of these iterative equations, and deduce the convergence with respect to (σ, h) of the FE solution (u_h^n, p_h^n) to the exact solution (u, p) of the 3D steady Navier–Stokes equations in the H¹–L² norm. Finally, we also give the convergence order with respect to (σ, h) of the FE velocity u_h^n to the exact velocity u in the L² norm.
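The difference between the three linearizations can be conveyed by a scalar analogy (an intuition aid only, not the FE method itself): solve u + u² = f by treating the quadratic term explicitly (Stokes-like), by lagging one factor of the product (Oseen-like), or by a full Newton step.

```python
# Scalar analogy for the three iterative linearizations (not the FE method):
# solve u + u^2 = f, whose positive root plays the role of the exact solution.

def stokes_it(f, u, n):
    for _ in range(n):  # nonlinearity treated fully explicitly
        u = f - u * u
    return u

def oseen_it(f, u, n):
    for _ in range(n):  # lag one factor: u_new * (1 + u_old) = f
        u = f / (1.0 + u)
    return u

def newton_it(f, u, n):
    for _ in range(n):  # full Newton step on F(u) = u + u^2 - f
        u = u - (u + u * u - f) / (1.0 + 2.0 * u)
    return u

f = 0.3
exact = (-1.0 + (1.0 + 4.0 * f) ** 0.5) / 2.0
for name, it in (("Stokes", stokes_it), ("Oseen", oseen_it), ("Newton", newton_it)):
    print(name, abs(it(f, 0.0, 4) - exact))  # Newton converges fastest
```

This mirrors the trade-off analyzed in the paper: Newton converges quadratically but needs a good start and small data, while the Stokes and Oseen iterations are cheaper per step with weaker (linear) convergence.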

15.
In this paper, we establish new (p, q)_{κ1}-integral and (p, q)_{κ2}-integral identities. By employing these new identities, we establish new (p, q)_{κ1} and (p, q)_{κ2} trapezoidal integral-type inequalities for strongly convex and quasi-convex functions. Finally, some examples are given to illustrate the investigated results.

16.
We use an m-vicinity method to examine Ising models on hypercube lattices of high dimensions d ≥ 3. This method is applicable to both short-range and long-range interactions. We introduce a small parameter, which determines whether the method can be used when calculating the free energy. When we account for interaction with the nearest neighbors only, the value of this parameter depends on the dimension of the lattice d. We obtain an expression for the critical temperature in terms of the interaction constants that is in good agreement with the results of computer simulations. For d = 5, 6, 7, our theoretical estimates match the numerical results both qualitatively and quantitatively. For d = 3, 4, our method is sufficiently accurate for the calculation of the critical temperatures; however, it predicts a finite jump of the heat capacity at the critical point. In the case of the three-dimensional lattice (d = 3), this contradicts the commonly accepted picture of the singularity at the critical point. For the four-dimensional lattice (d = 4), the character of the singularity is still under discussion. For the dimensions d = 1, 2, the m-vicinity method is not applicable.

17.
The aim of this paper is to show that α-limit sets in Lorenz maps do not have to be completely invariant. This highlights unexpected dynamical behavior in these maps and exposes gaps in the existing literature. A similar result is obtained for unimodal maps on [0, 1]. On the basis of the provided examples, we also show how the study of the structure of α-limit sets is closely connected with the calculation of the topological entropy.

18.
19.
We present a coupled variational autoencoder (VAE) method, which improves the accuracy and robustness of the model representation of handwritten numeral images. The improvement is measured both in increasing the likelihood of the reconstructed images and in reducing the divergence between the posterior and a prior latent distribution. The new method weighs outlier samples with a higher penalty by generalizing the original evidence lower bound function using a coupled entropy function based on the principles of nonlinear statistical coupling. We evaluated the performance of the coupled VAE model using the Modified National Institute of Standards and Technology (MNIST) dataset and its corrupted modification C-MNIST. Histograms of the likelihood that the reconstruction matches the original image show that the coupled VAE improves the reconstruction, and this improvement is more substantial when seeded with corrupted images. All five corruptions evaluated showed improvement. For instance, with the Gaussian corruption seed, the accuracy improves by a factor of 10^14 (from 10^-57.2 to 10^-42.9) and the robustness improves by a factor of 10^22 (from 10^-109.2 to 10^-87.0). Furthermore, the divergence between the posterior and prior latent distributions is reduced. Thus, in contrast to the β-VAE design, the coupled VAE algorithm improves the model representation rather than trading off reconstruction performance against latent distribution divergence.

20.
The discrepancy between one-electron and two-electron densities for diverse N-electron atoms, covering neutral systems (with nuclear charge Z = N) and singly charged ions (|N − Z| = 1), is quantified by means of mutual information, I, and the Quantum Similarity Index, QSI, in the conjugate position and momentum spaces. These differences can be interpreted as a measure of the electron correlation of the system. The analysis is carried out by considering systems with a nuclear charge up to Z = 103 and singly charged ions (cations and anions) up to N = 54. The interelectronic correlation, for any given system, is quantified through the comparison of its two-variable electron pair density with the product of the respective one-particle densities. An in-depth study along the Periodic Table reveals the importance, far beyond the weight of the systems considered, of their shell structure.
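Mutual information as used above compares a joint density with the product of its marginals; a discrete sketch (toy distributions, not atomic densities) makes the construction concrete.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) = sum over (x, y) of p(x,y) * log[p(x,y) / (p(x)p(y))] for a
    discrete joint distribution: it vanishes exactly when the joint density
    factorizes into its marginals, mirroring how the paper compares the
    electron pair density with the product of one-particle densities."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

independent = np.outer([0.5, 0.5], [0.5, 0.5])   # product density: I = 0
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])  # perfectly correlated
print(mutual_information(independent))  # 0.0
print(mutual_information(correlated))   # ln 2 ≈ 0.693
```

In the paper the same quantity is evaluated with continuous one- and two-electron densities, so the sums become integrals, but the zero-iff-factorized property that makes I a correlation measure is identical.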
