Similar Documents
20 similar documents found (search time: 31 ms)
1.
A possible detection of sub-solar mass ultra-compact objects would open new perspectives on the existence of black holes that are not of astrophysical origin and/or on formation scenarios of exotic ultra-compact objects. Both possibilities offer new routes toward a better understanding of our universe. In this work, we investigate the detection prospects for sub-solar mass binaries with component masses in the range $10^{-2}\,M_{\odot}$ up to $1\,M_{\odot}$, within the expected sensitivity of the third-generation ground-based gravitational-wave detectors, viz., the Einstein Telescope (ET) and the Cosmic Explorer (CE). Assuming a minimum amplitude signal-to-noise ratio of $\rho = 8$ for detection, we find that the maximum horizon distances for an ultra-compact binary system with component masses $10^{-2}\,M_{\odot}$ and $1\,M_{\odot}$ are 40 Mpc and 1.89 Gpc, respectively, for ET, and 125 Mpc and 5.8 Gpc, respectively, for CE. Other cases are also presented in the text. We derive the merger rate and discuss the consequences for the abundance of primordial black holes (PBHs), $f_{\rm PBH}$. Considering the entire mass range $[10^{-2}\text{–}1]\,M_{\odot}$, we find $f_{\rm PBH} < 0.70$ ($<0.06$) for ET (CE), respectively.
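As a rough, hedged consistency check of the quoted horizons (not taken from the paper): for inspiral-dominated signals the optimal SNR scales roughly as $\mathcal{M}^{5/6}/d$ with the chirp mass $\mathcal{M}$, so at a fixed threshold $\rho = 8$ the horizon distance should scale as $\mathcal{M}^{5/6}$. A minimal sketch, assuming equal-mass binaries:

```python
# Rough consistency check (not from the paper): for inspiral-dominated signals the
# optimal SNR scales as rho ~ Mchirp^(5/6) / d, so the horizon distance at fixed
# rho = 8 scales as d_h ~ Mchirp^(5/6).  Equal-mass binaries are assumed here.
m_low, m_high = 1e-2, 1.0                    # component masses in solar masses
scaling = (m_high / m_low) ** (5.0 / 6.0)    # chirp-mass ratio equals the component-mass ratio

d_et_low, d_ce_low = 40e-3, 125e-3           # quoted ET/CE horizons at 1e-2 Msun, in Gpc
print(f"predicted ET horizon at 1 Msun: {d_et_low * scaling:.2f} Gpc (quoted: 1.89 Gpc)")
print(f"predicted CE horizon at 1 Msun: {d_ce_low * scaling:.2f} Gpc (quoted: 5.8 Gpc)")
```

Both predicted values land close to the quoted 1.89 Gpc and 5.8 Gpc, consistent with the inspiral-dominated scaling assumption.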

2.
In this investigation, some new $(p,q)$-Hermite–Hadamard-type inequalities for convex functions are obtained using the notions of the $(p,q)^{\pi_2}$-derivative and the $(p,q)^{\pi_2}$-integral. Furthermore, for $(p,q)^{\pi_2}$-differentiable convex functions, some new $(p,q)$-estimates for midpoint- and trapezoidal-type inequalities are offered using the notion of the $(p,q)^{\pi_2}$-integral. It is also shown that the newly proved results reduce to some existing results for $p = 1$ and in the limit $q \to 1^{-}$. Finally, we discuss how special means can be used to address the newly discovered inequalities.
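For orientation, the $(p,q)$-calculus underlying these results replaces the ordinary derivative with a two-parameter difference quotient. The sketch below uses the standard (non-shifted) $(p,q)$-derivative, $D_{p,q}f(x) = \frac{f(px) - f(qx)}{(p-q)x}$, rather than the paper's endpoint-based $(p,q)^{\pi_2}$ operator, purely as an illustration of the $q \to 1^{-}$ limit:

```python
# Illustrative sketch of the standard (p,q)-derivative (not the paper's endpoint-based
# (p,q)^{pi_2} operator): D_{p,q} f(x) = (f(p x) - f(q x)) / ((p - q) x), x != 0.
def pq_derivative(f, x, p, q):
    return (f(p * x) - f(q * x)) / ((p - q) * x)

f = lambda x: x ** 3                          # a convex test function on x > 0
x = 2.0
print(pq_derivative(f, x, p=1.0, q=0.5))      # (p,q)-derivative with p = 1, q = 0.5
print(pq_derivative(f, x, p=1.0, q=0.999))    # q -> 1^-: approaches f'(x) = 3 x^2 = 12
```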

3.
This study deals with drift parameter estimation problems in the sub-fractional Vasicek process given by $dx_t = \theta(\mu - x_t)\,dt + dS_t^{H}$, with $\theta > 0$ and $\mu \in \mathbb{R}$ unknown and $t \geq 0$; here, $S^{H}$ represents a sub-fractional Brownian motion (sfBm). We introduce new estimators $\hat{\theta}$ for $\theta$ and $\hat{\mu}$ for $\mu$ based on discrete-time observations and use techniques from Nourdin–Peccati analysis. For the proposed estimators $\hat{\theta}$ and $\hat{\mu}$, strong consistency and asymptotic normality are established by employing the properties of $S^{H}$. Moreover, we provide numerical simulations of sfBm and the related Vasicek-type process for different values of the Hurst index $H$.
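As a hedged illustration of such a simulation (not the paper's estimators): sub-fractional Brownian motion has covariance $\mathbb{E}[S_s^H S_t^H] = s^{2H} + t^{2H} - \tfrac{1}{2}\big[(s+t)^{2H} + |t-s|^{2H}\big]$, so a path can be generated on a grid via a Cholesky factorization and fed into an Euler scheme for the Vasicek-type SDE; the ordinary least-squares drift fit at the end is only a naive stand-in for the estimators studied in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's estimators): simulate a sub-fractional Brownian
# motion S^H on a grid via its covariance, drive an Euler scheme for the Vasicek-type
# SDE dx_t = theta*(mu - x_t) dt + dS_t^H, and fit the drift by ordinary least squares.
def sub_fbm(n, T, H, rng):
    t = np.linspace(T / n, T, n)                     # grid t_1..t_n (exclude t = 0)
    s, u = np.meshgrid(t, t, indexing="ij")
    # sfBm covariance: s^{2H} + t^{2H} - 0.5*[(s+t)^{2H} + |s-t|^{2H}]
    cov = s**(2*H) + u**(2*H) - 0.5 * ((s + u)**(2*H) + np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # small jitter for numerical stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

rng = np.random.default_rng(0)
theta, mu, H, T, n = 2.0, 1.0, 0.7, 10.0, 1000
dt = T / n
S = sub_fbm(n, T, H, rng)
x = np.zeros(n + 1)
for k in range(n):                                   # Euler scheme for the Vasicek-type SDE
    x[k + 1] = x[k] + theta * (mu - x[k]) * dt + (S[k + 1] - S[k])

# Naive least-squares drift fit: regress increments on (1, x_k); illustrative only.
A = np.column_stack([np.ones(n), x[:-1]]) * dt
a, b = np.linalg.lstsq(A, np.diff(x), rcond=None)[0]
theta_hat, mu_hat = -b, a / -b
print(f"theta_hat = {theta_hat:.3f}, mu_hat = {mu_hat:.3f}  (true: {theta}, {mu})")
```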

4.
In this paper, we present a new method for the construction of maximally entangled states in $\mathbb{C}^{d} \otimes \mathbb{C}^{d'}$ when $d' \geq 2d$. A systematic way of constructing a set of maximally entangled bases (MEBs) in $\mathbb{C}^{d} \otimes \mathbb{C}^{d'}$ is established. Both the case when $d'$ is divisible by $d$ and the case when it is not are discussed. We give two examples of maximally entangled bases in $\mathbb{C}^{2} \otimes \mathbb{C}^{4}$, which are mutually unbiased bases. Finally, we find a new example of an unextendible maximally entangled basis (UMEB) in $\mathbb{C}^{2} \otimes \mathbb{C}^{5}$.
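For context, a pure state in $\mathbb{C}^{d} \otimes \mathbb{C}^{d'}$ with $d \leq d'$ is maximally entangled when tracing out the larger subsystem leaves the maximally mixed state $I/d$. The sketch below checks this defining property numerically for one state in $\mathbb{C}^{2} \otimes \mathbb{C}^{4}$; it illustrates the definition, not the paper's basis construction:

```python
import numpy as np

# Illustrative check (not the paper's construction): a state in C^2 (x) C^4 is maximally
# entangled iff tracing out the larger subsystem leaves the maximally mixed state I/2.
d, dp = 2, 4
e0, e1 = np.eye(dp)[0], np.eye(dp)[1]          # two orthonormal vectors in C^4
psi = (np.kron(np.eye(d)[0], e0) + np.kron(np.eye(d)[1], e1)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())                # full 8x8 density matrix
rho_A = np.trace(rho.reshape(d, dp, d, dp), axis1=1, axis2=3)  # partial trace over C^4
print(np.allclose(rho_A, np.eye(d) / d))       # True: reduced state is I/2
```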

5.
Private Information Retrieval (PIR) protocols, which allow the client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, advertisement, etc. Multi-server PIR protocols, where the database is replicated among non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. at CCC 2012 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for $k \geq 3$ servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret sharing schemes with respect to a certain type of “modified universal” relation. In a useful particular instantiation of the paradigm, they used a share conversion from $(2,3)$-CNF over $\mathbb{Z}_m$ to three-additive sharing over $\mathbb{Z}_p^{\beta}$ for primes $p_1, p_2, p$, where $p_1 \neq p_2$ and $m = p_1 \cdot p_2$, and the relation is the modified universal relation $C_{S_m}$. They reduced the question of the existence of the share conversion for a triple $(p_1, p_2, p)$ to the (in)solvability of a certain linear system over $\mathbb{Z}_p$, and provided an efficient (in $m$ and $\log p$) construction of such a sharing scheme. Unfortunately, the size of the system is $\Theta(m^2)$, which makes a direct solution infeasible for big $m$'s in practice. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for the case of odd $p_1, p_2$ when $p = p_1$, obtaining in this way infinitely many parameters for which the conversion exists, but for infinitely many others the question remained open. In this work, using some algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even $m$'s in the case $p = 2$ (we computed $\beta$ in this case) and the absence of the conversion for even $m$'s in the case $p > 2$. This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with $k \geq 3$ servers, using the relation $C_{S_m}$ where $m$ has more than two prime divisors. We also suggest that, for 3-server PIR, it is possible to achieve a shorter server response using the relation $C_{S'_m}$ for an extended set $S'_m \supset S_m$. By computer search within the BIKO framework, we found several such sets for small $m$'s which result in a share conversion from $(2,3)$-CNF over $\mathbb{Z}_m$ to 3-additive secret sharing over $\mathbb{Z}_p^{\beta'}$, where $\beta' > 0$ is several times smaller than $\beta$, which implies a several times shorter server response. We also suggest that such extended sets $S'_m$ can result in better PIR due to the potential existence of matching vector families with higher Vapnik–Chervonenkis dimension.

6.
Detrended Fluctuation Analysis (DFA) has become a standard method to quantify the correlations and scaling properties of real-world complex time series. For a given scale of observation $\ell$, DFA provides the function $F(\ell)$, which quantifies the fluctuations of the time series around the local trend, which is subtracted (detrended). If the time series exhibits scaling properties, then $F(\ell) \sim \ell^{\alpha}$ asymptotically, and the scaling exponent $\alpha$ is typically estimated as the slope of a linear fit in the $\log F(\ell)$ vs. $\log \ell$ plot. In this way, $\alpha$ measures the strength of the correlations and characterizes the underlying dynamical system. However, in many cases, and especially for physiological time series, the scaling behavior is different at short and long scales, resulting in $\log F(\ell)$ vs. $\log \ell$ plots with two different slopes: $\alpha_1$ at short scales and $\alpha_2$ at large scales of observation. These two exponents are usually associated with the existence of different mechanisms that work at distinct time scales acting on the underlying dynamical system. Here, however, and since the power-law behavior of $F(\ell)$ is asymptotic, we question the use of $\alpha_1$ to characterize the correlations at short scales. To this end, we show first that, even for artificial time series with perfect scaling, i.e., with a single exponent $\alpha$ valid for all scales, DFA provides an $\alpha_1$ value that systematically overestimates the true exponent $\alpha$. Second, when artificial time series with two different scaling exponents at short and large scales are considered, the $\alpha_1$ value provided by DFA not only can severely underestimate or overestimate the true short-scale exponent, but also depends on the value of the large-scale exponent. This behavior should prevent the use of $\alpha_1$ to describe the scaling properties at short scales: if DFA is used on two time series with the same scaling behavior at short scales but very different scaling properties at large scales, very different values of $\alpha_1$ will be obtained, although the short-scale properties are identical. These artifacts may lead to wrong interpretations when analyzing real-world time series: on the one hand, for time series with truly perfect scaling, the spurious value of $\alpha_1$ could lead to wrongly thinking that there exists some specific mechanism acting only at short time scales in the dynamical system. On the other hand, for time series with truly different scaling at short and large scales, the incorrect $\alpha_1$ value would not properly characterize the short-scale behavior of the dynamical system.
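A minimal DFA-1 sketch in Python, following the conventions described above (integrate the mean-removed series, remove a linear trend within each window of size $\ell$, and take the RMS of the residuals); the cut between "short" and "large" scales at 64 samples is arbitrary and purely illustrative:

```python
import numpy as np

# Minimal DFA sketch (first-order detrending): integrate the mean-removed series, split it
# into windows of size l, remove a linear fit in each window, and return the RMS
# fluctuation F(l).  alpha is the slope of log F(l) vs log l.
def dfa_fluctuation(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    F = []
    for l in scales:
        n_win = len(y) // l
        sq_res = []
        for i in range(n_win):
            seg = y[i * l:(i + 1) * l]
            t = np.arange(l)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            sq_res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq_res)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(2**14)                    # white noise: true alpha = 0.5 at all scales
scales = np.unique(np.logspace(np.log10(8), np.log10(2048), 20).astype(int))
F = dfa_fluctuation(x, scales)

short, large = scales <= 64, scales > 64          # illustrative split into short/large scales
a1 = np.polyfit(np.log(scales[short]), np.log(F[short]), 1)[0]
a2 = np.polyfit(np.log(scales[large]), np.log(F[large]), 1)[0]
print(f"alpha_1 (short scales) = {a1:.2f}, alpha_2 (large scales) = {a2:.2f}")
```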

7.
We study the viable Starobinsky $f(R)$ dark energy model in spatially non-flat FLRW backgrounds, where $f(R) = R - \lambda R_{\rm ch}\left[1 - \left(1 + R^2/R_{\rm ch}^2\right)^{-1}\right]$, with $R_{\rm ch}$ and $\lambda$ representing the characteristic curvature scale and the model parameter, respectively. We modify the CAMB and CosmoMC packages with the recent observational data to constrain Starobinsky $f(R)$ gravity and the density parameter of curvature $\Omega_K$. In particular, we find the model and density parameters to be $\lambda^{-1} < 0.283$ at 68% C.L. and $\Omega_K = 0.00099^{+0.0044}_{-0.0042}$ at 95% C.L., respectively. The best-fit $\chi^2$ result shows that $\chi^2_{f(R)} \lesssim \chi^2_{\Lambda{\rm CDM}}$, indicating that the viable $f(R)$ gravity model is consistent with $\Lambda$CDM when $\Omega_K$ is set as a free parameter. We also evaluate the values of AIC, BIC and DIC for the best-fit results of the $f(R)$ and $\Lambda$CDM models in the non-flat universe.
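As a quick, hedged check of the limiting behaviour of the quoted functional form (the parameter values below are arbitrary, not the fitted ones): $f(0) = 0$, so the model contains no cosmological constant at zero curvature, while for $R \gg R_{\rm ch}$ one gets $f(R) \to R - \lambda R_{\rm ch}$, i.e. an effective constant offset:

```python
# Small numerical check of the limiting behaviour of the quoted Starobinsky form
# f(R) = R - lam*Rch*[1 - (1 + R^2/Rch^2)^(-1)]  (units and parameter values arbitrary here).
lam, Rch = 2.0, 1.0
f = lambda R: R - lam * Rch * (1.0 - 1.0 / (1.0 + (R / Rch) ** 2))

print(f(0.0))            # 0.0: no cosmological constant at R = 0
print(f(1e4) - 1e4)      # ~ -lam*Rch for R >> Rch, i.e. an effective constant offset of -2.0
```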

8.
We present a coupled variational autoencoder (VAE) method, which improves the accuracy and robustness of the model representation of handwritten numeral images. The improvement is measured both by an increase in the likelihood of the reconstructed images and by a reduction of the divergence between the posterior and a prior latent distribution. The new method weighs outlier samples with a higher penalty by generalizing the original evidence lower bound function using a coupled entropy function based on the principles of nonlinear statistical coupling. We evaluated the performance of the coupled VAE model using the Modified National Institute of Standards and Technology (MNIST) dataset and its corrupted modification C-MNIST. Histograms of the likelihood that the reconstruction matches the original image show that the coupled VAE improves the reconstruction, and this improvement is more substantial when seeded with corrupted images. All five corruptions evaluated showed improvement. For instance, with the Gaussian corruption seed the accuracy improves by $10^{14}$ (from $10^{-57.2}$ to $10^{-42.9}$) and the robustness improves by $10^{22}$ (from $10^{-109.2}$ to $10^{-87.0}$). Furthermore, the divergence between the posterior and prior of the latent distribution is reduced. Thus, in contrast to the $\beta$-VAE design, the coupled VAE algorithm improves the model representation, rather than trading off reconstruction performance against latent distribution divergence.

9.
This paper studies the effect of quantum computers on Bitcoin mining. The shift in computational paradigm towards quantum computation allows the entire search space of the golden nonce to be queried at once by exploiting quantum superposition and entanglement. Using Grover's algorithm, a solution can be extracted in time $O(\sqrt{2^{256}/t})$, where $t$ is the target value for the nonce. This is a square-root improvement over the classical search algorithm, which requires $O(2^{256}/t)$ tries. If sufficiently large quantum computers are available to the public, mining activity in the classical sense becomes obsolete, as quantum computers always win. Without considering quantum noise, the size of the quantum computer needs to be $10^4$ qubits.
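A back-of-the-envelope comparison of the two query counts, under the idealized assumptions above (a noiseless Grover search needing roughly $(\pi/4)\sqrt{2^{256}/t}$ oracle calls; the target count below is illustrative, not a real difficulty setting):

```python
from math import isqrt, pi

# Back-of-the-envelope comparison (assumptions: idealized Grover search with ~(pi/4)*sqrt(N/t)
# iterations and no noise/overhead; t is the number of nonces meeting the target).
N = 2 ** 256                       # size of the nonce/hash search space used above
t = 2 ** 180                       # illustrative target count, not a real difficulty setting

classical_tries = N // t           # expected order of classical hash evaluations, ~2^76
grover_queries = int(pi / 4 * isqrt(N // t))   # ~ (pi/4) * 2^38 oracle calls
print(f"classical ~ 2^{classical_tries.bit_length() - 1}, "
      f"Grover ~ 2^{grover_queries.bit_length() - 1}")
```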

10.
This article establishes several integral inequalities involving $(h,m)$-convexity via quantum calculus, through which important integral inequalities, including Simpson-like, midpoint-like, averaged midpoint–trapezoid-like and trapezoid-like inequalities, are extended. We generalize some quantum integral inequalities for $q$-differentiable $(h,m)$-convex functions. Our results could serve as a refinement and unification of some classical results existing in the literature, which are recovered by taking the limit $q \to 1^{-}$.

11.
We use an $m$-vicinity method to examine Ising models on hypercube lattices of high dimensions $d \geq 3$. This method is applicable to both short-range and long-range interactions. We introduce a small parameter, which determines whether the method can be used when calculating the free energy. When we account for interaction with the nearest neighbors only, the value of this parameter depends on the dimension of the lattice $d$. We obtain an expression for the critical temperature in terms of the interaction constants that is in good agreement with the results of computer simulations. For $d = 5, 6, 7$, our theoretical estimates match the numerical results both qualitatively and quantitatively. For $d = 3, 4$, our method is sufficiently accurate for the calculation of the critical temperatures; however, it predicts a finite jump of the heat capacity at the critical point. In the case of the three-dimensional lattice ($d = 3$), this contradicts the commonly accepted ideas about the type of singularity at the critical point. For the four-dimensional lattice ($d = 4$), the character of the singularity is under current discussion. For the dimensions $d = 1, 2$ the $m$-vicinity method is not applicable.

12.
Recently, Savaré–Toscani proved that the Rényi entropy power of general probability densities solving the $p$-nonlinear heat equation in $\mathbb{R}^n$ is a concave function of time under certain conditions on three parameters $n, p, \mu$, which extends Costa's concavity inequality for Shannon's entropy power to the Rényi entropy power. In this paper, we give a condition $\Phi(n, p, \mu)$ on $n, p, \mu$ under which the concavity of the Rényi entropy power is valid. The condition $\Phi(n, p, \mu)$ contains Savaré–Toscani's condition as a special case and much more besides. Precisely, the points $(n, p, \mu)$ satisfying Savaré–Toscani's condition form a two-dimensional subset of $\mathbb{R}^3$, whereas the points satisfying the condition $\Phi(n, p, \mu)$ form a three-dimensional subset of $\mathbb{R}^3$. Furthermore, $\Phi(n, p, \mu)$ gives the necessary and sufficient condition in a certain sense. Finally, the conditions are obtained with a systematic approach.

13.
In this work, a finite element (FE) method is discussed for the 3D steady Navier–Stokes equations using the finite element pair $X_h \times M_h$. The method consists of approximating the finite element solution $(u_h, p_h)$ of the 3D steady Navier–Stokes equations by the finite element solution pairs $(u_h^n, p_h^n)$, based on the finite element space pair $X_h \times M_h$, of the 3D steady linearized Navier–Stokes equations obtained with the Stokes, Newton and Oseen iterative methods, where the finite element space pair $X_h \times M_h$ satisfies the discrete inf-sup condition in a 3D domain $\Omega$. Here, we present the weak formulations of the FE method for solving the 3D steady Stokes, Newton and Oseen iterative equations, provide the existence and uniqueness of the FE solution $(u_h^n, p_h^n)$ of the 3D steady Stokes, Newton and Oseen iterative equations, and deduce the convergence with respect to $(\sigma, h)$ of the FE solution $(u_h^n, p_h^n)$ to the exact solution $(u, p)$ of the 3D steady Navier–Stokes equations in the $H^1$–$L^2$ norm. Finally, we also give the convergence order with respect to $(\sigma, h)$ of the FE velocity $u_h^n$ to the exact velocity $u$ of the 3D steady Navier–Stokes equations in the $L^2$ norm.
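For reference, one common way the three linearizations are written in the literature, as a sketch in standard notation (viscosity $\nu$, bilinear forms $a(\cdot,\cdot)$ and $d(\cdot,\cdot)$, trilinear convection form $b(\cdot;\cdot,\cdot)$); the paper's exact weak formulations may differ:

```latex
% A sketch of the three linearizations in standard notation (not necessarily the paper's):
% a(u,v) = (\nabla u, \nabla v),  d(v,q) = (q, \nabla\cdot v),  b(u;v,w) the trilinear convection
% form, holding for all test functions (v,q) in X_h x M_h.
\begin{align*}
\text{Stokes:}\quad & \nu\, a(u_h^{n},v) - d(v,p_h^{n}) + d(u_h^{n},q)
    = (f,v) - b(u_h^{n-1};u_h^{n-1},v),\\
\text{Oseen:}\quad  & \nu\, a(u_h^{n},v) - d(v,p_h^{n}) + d(u_h^{n},q) + b(u_h^{n-1};u_h^{n},v)
    = (f,v),\\
\text{Newton:}\quad & \nu\, a(u_h^{n},v) - d(v,p_h^{n}) + d(u_h^{n},q)
    + b(u_h^{n-1};u_h^{n},v) + b(u_h^{n};u_h^{n-1},v)
    = (f,v) + b(u_h^{n-1};u_h^{n-1},v).
\end{align*}
```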

14.
The discrepancy between one-electron and two-electron densities for diverse $N$-electron atoms, comprising neutral systems (with nuclear charge $Z = N$) and singly charged ions ($|N - Z| = 1$), is quantified by means of mutual information, $I$, and the Quantum Similarity Index, QSI, in the conjugate position/momentum spaces. These differences can be interpreted as a measure of the electron correlation of the system. The analysis is carried out for systems with nuclear charge up to $Z = 103$ and singly charged ions (cations and anions) with up to $N = 54$ electrons. The interelectronic correlation of any given system is quantified by comparing its two-variable electron pair density with the product of the respective one-particle densities. An in-depth study along the Periodic Table reveals the importance, far beyond the weight of the systems considered, of their shell structure.
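A generic, hedged sketch of the comparison described above: the mutual information between a two-variable density and the product of its marginals on a discrete grid. The correlated Gaussian used here is only a stand-in for an actual atomic pair density (for correlation $c = 0.6$ the exact value is $-\tfrac{1}{2}\ln(1 - c^2) \approx 0.22$ nats):

```python
import numpy as np

# Generic sketch of the comparison described above: the mutual information between a
# two-variable density rho2(x1, x2) and the product of its marginals, on a discrete grid.
# The correlated Gaussian used here is a stand-in, not an actual atomic pair density.
n = 200
x = np.linspace(-5, 5, n)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
c = 0.6                                              # correlation strength
rho2 = np.exp(-(X1**2 + X2**2 - 2*c*X1*X2) / (2*(1 - c**2)))
rho2 /= rho2.sum() * dx * dx                         # normalize the pair density

rho_a = rho2.sum(axis=1) * dx                        # one-particle marginals
rho_b = rho2.sum(axis=0) * dx
product = np.outer(rho_a, rho_b)

mask = rho2 > 0
I = np.sum(rho2[mask] * np.log(rho2[mask] / product[mask])) * dx * dx
print(f"mutual information I = {I:.3f} nats")        # 0 iff the density factorizes
```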

15.
Neural network quantum states (NQS) have been widely applied to spin-1/2 systems, where they have proven to be highly effective. The application to systems with larger on-site dimension, such as spin-1 or bosonic systems, has been explored less and predominantly using spin-1/2 Restricted Boltzmann Machines (RBMs) with a one-hot/unary encoding. Here, we propose a more direct generalization of RBMs for spin-1 that retains the key properties of the standard spin-1/2 RBM, specifically trivial product-state representations, labeling freedom for the visible variables and gauge equivalence to the tensor network formulation. To test this new approach, we present variational Monte Carlo (VMC) calculations for the spin-1 anti-ferromagnetic Heisenberg (AFH) model and benchmark it against the one-hot/unary encoded RBM, demonstrating that it achieves the same accuracy with substantially fewer variational parameters. Furthermore, we investigate how the hidden-unit complexity of NQS depends on the local single-spin basis used. Exploiting the tensor network version of our RBM, we construct an analytic NQS representation of the Affleck–Kennedy–Lieb–Tasaki (AKLT) state in the xyz spin-1 basis using only $M = 2N$ hidden units, compared to $M \sim O(N^2)$ required in the $S^z$ basis. Additional VMC calculations provide strong evidence that the AKLT state in fact possesses an exact compact NQS representation in the xyz basis with only $M = N$ hidden units. These insights help to further unravel how to most effectively adapt the NQS framework for more complex quantum systems.
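For orientation only, the sketch below evaluates a standard RBM-style amplitude directly on spin-1 configurations $s_i \in \{-1, 0, +1\}$; this generic functional form illustrates the idea of avoiding a one-hot encoding, but it is not necessarily the exact parameterization proposed in the paper:

```python
import numpy as np

# Generic illustration of an RBM-style amplitude evaluated directly on spin-1 configurations
# s_i in {-1, 0, +1} (a standard RBM functional form, not necessarily the exact
# parameterization proposed in the paper).
def rbm_amplitude(s, a, b, W):
    # psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ij s_i)
    theta = b + W.T @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(2)
N, M = 6, 6                                   # visible spin-1 sites and hidden units
a, b = 0.1 * rng.standard_normal(N), 0.1 * rng.standard_normal(M)
W = 0.1 * rng.standard_normal((N, M))

s = np.array([1, 0, -1, 1, 0, -1])            # one spin-1 configuration in the S^z basis
print(rbm_amplitude(s, a, b, W))
```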

16.
In this paper, we establish new $(p,q)_{\kappa_1}$-integral and $(p,q)^{\kappa_2}$-integral identities. By employing these new identities, we establish new $(p,q)_{\kappa_1}$- and $(p,q)^{\kappa_2}$-trapezoidal integral-type inequalities through strongly convex and quasi-convex functions. Finally, some examples are given to illustrate the investigated results.

17.
We used the blast wave model with Boltzmann–Gibbs statistics and analyzed the experimental data measured by the NA61/SHINE Collaboration in inelastic (INEL) proton–proton collisions in different rapidity slices at different center-of-mass energies. The particles used in this study were $\pi^+$, $\pi^-$, $K^+$, $K^-$, and $\bar{p}$. We extracted the kinetic freeze-out temperature, transverse flow velocity, and kinetic freeze-out volume from the transverse momentum spectra of the particles. We observed that the kinetic freeze-out temperature is rapidity and energy dependent, while the transverse flow velocity does not depend on them. Furthermore, we observed that the kinetic freeze-out volume is energy dependent, but it remains constant with changing rapidity. We also observed that all three parameters are mass dependent. In addition, with increasing mass, the kinetic freeze-out temperature increases, while the transverse flow velocity and the kinetic freeze-out volume decrease.
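For reference, a commonly used Boltzmann–Gibbs blast-wave form for such transverse-momentum fits is sketched below (a Schnedermann–Sollfrank–Heinz-type parametrization with kinetic freeze-out temperature $T_{\mathrm{kin}}$ and transverse flow rapidity $\rho(r)$); the exact parametrization used in the paper may differ:

```latex
% Standard Boltzmann-Gibbs blast-wave form often used for such fits (a sketch; the paper's
% exact parametrization may differ):
\frac{dN}{2\pi\, p_T\, dp_T} \;\propto\; \int_0^{R} r\,dr\; m_T\,
  I_0\!\left(\frac{p_T \sinh\rho(r)}{T_{\mathrm{kin}}}\right)
  K_1\!\left(\frac{m_T \cosh\rho(r)}{T_{\mathrm{kin}}}\right),
\qquad \rho(r) = \tanh^{-1}\!\beta_T(r), \quad m_T = \sqrt{p_T^2 + m_0^2}.
```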

18.
The aim of this paper is to show that $\alpha$-limit sets in Lorenz maps do not have to be completely invariant. This highlights unexpected dynamical behavior in these maps and reveals gaps in the existing literature. A similar result is obtained for unimodal maps on $[0,1]$. On the basis of the provided examples, we also show how this study of the structure of $\alpha$-limit sets is closely connected with the calculation of topological entropy.

19.
An end-to-end joint source–channel (JSC) encoding matrix and a JSC decoding scheme using the proposed bit flipping check (BFC) algorithm and the controversial variable node selection-based adaptive belief propagation (CVNS-ABP) decoding algorithm are presented to improve the efficiency and reliability of the joint source–channel coding (JSCC) scheme based on double Reed–Solomon (RS) codes. The constructed coding matrix can realize source compression and channel coding of multiple sets of information data simultaneously, which significantly improves the coding efficiency. The proposed BFC algorithm uses channel soft information to select and flip the unreliable bits and then uses the redundancy of the source block to perform error verification and error correction. The proposed CVNS-ABP algorithm reduces the influence of erroneous bits on decoding by selecting erroneous variable nodes (VNs) from among the controversial VNs and including them in the sparsified part of the parity-check matrix. In addition, the proposed JSC decoding scheme based on the BFC and CVNS-ABP algorithms can realize the connection of source and channel to improve the performance of JSC decoding. Simulation results show that the proposed BFC-based hard-decision decoding (BFC-HDD) algorithm ($\zeta = 1$) and the BFC-based low-complexity chase (BFC-LCC) algorithm ($\zeta = 1$, $\eta = 3$) can achieve about 0.23 dB and 0.46 dB of signal-to-noise ratio (SNR) gain over the prior-art decoding algorithm at a frame error rate (FER) of $10^{-1}$. Compared with the ABP algorithm, the proposed CVNS-ABP algorithm and BFC-CVNS-ABP algorithm achieve performance gains of 0.18 dB and 0.23 dB, respectively, at FER $= 10^{-3}$.

20.
In this work, we first consider novel parameterized identities for the left and right parts of the $(p,q)$-analogue of the Hermite–Hadamard inequality. Second, using these new parameterized identities, we give new parameterized $(p,q)$-trapezoid and parameterized $(p,q)$-midpoint type integral inequalities via $\eta$-quasiconvex functions. By changing the value of the parameter $\mu \in [0,1]$, some new special cases are obtained from the main results and some known results are recaptured as well. Finally, an application to special means is given. This new research has the potential to establish new boundaries in comparative literature and some well-known implications. From an application perspective, the proposed research on the $\eta$-quasiconvex function yields interesting results that illustrate the applicability and superiority of the results obtained.

