Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we propose to quantitatively compare loss functions based on the parameterized Tsallis–Havrda–Charvat entropy and the classical Shannon entropy for training a deep network on the small datasets usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to the segmentation, classification and detection of images, and Shannon entropy is a particular case of Tsallis–Havrda–Charvat entropy. In this work, we compare these two entropies through a medical application: predicting recurrence in patients with head–neck and lung cancers after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed to perform a recurrence prediction task using cross-entropy as a loss function, together with an image reconstruction task. Tsallis–Havrda–Charvat cross-entropy is a parameterized cross-entropy with parameter α; Shannon entropy is recovered for α = 1. The influence of this parameter on the final prediction results is studied. The experiments are conducted on two datasets comprising 580 patients in total, of whom 434 suffered from head–neck cancers and 146 from lung cancers. The results show that Tsallis–Havrda–Charvat entropy can achieve better prediction accuracy for some values of α.
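As an illustration of the loss family compared above, here is a minimal sketch of an α-parameterized cross-entropy that reduces to Shannon cross-entropy as α → 1. The exact functional form used in the paper may differ; treat this as one plausible variant, not the authors' implementation.

```python
import math

def thc_cross_entropy(y, p, alpha):
    """Tsallis-Havrda-Charvat-style cross-entropy between one-hot target y and
    predicted probabilities p. One plausible parameterization (assumption):
    sum_i y_i * (1 - p_i**(alpha-1)) / (alpha-1), which tends to the Shannon
    cross-entropy -sum_i y_i * ln(p_i) as alpha -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(yi * math.log(pi) for yi, pi in zip(y, p) if yi > 0)
    return sum(yi * (1.0 - pi ** (alpha - 1.0)) / (alpha - 1.0)
               for yi, pi in zip(y, p) if yi > 0)

y = [0.0, 1.0, 0.0]        # one-hot target
p = [0.2, 0.7, 0.1]        # predicted class probabilities
shannon = thc_cross_entropy(y, p, 1.0)
near_one = thc_cross_entropy(y, p, 1.0001)  # should be close to the Shannon value
```

A first-order expansion of p**(α−1) around α = 1 shows why the two losses agree in that limit.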

2.
Blaum–Roth codes are binary maximum distance separable (MDS) array codes over the binary quotient ring F_2[x]/(M_p(x)), where M_p(x) = 1 + x + ... + x^(p-1) and p is a prime number. Two existing all-erasure decoding methods for Blaum–Roth codes are the syndrome-based decoding method and the interpolation-based decoding method. In this paper, we propose a modified syndrome-based decoding method and a modified interpolation-based decoding method that have lower decoding complexity than the syndrome-based and interpolation-based methods, respectively. Moreover, we present a fast decoding method for Blaum–Roth codes based on the LU decomposition of the Vandermonde matrix that has lower decoding complexity than the two modified decoding methods for most parameters.

3.
In this work, a finite element (FE) method is discussed for the 3D steady Navier–Stokes equations using the finite element pair X_h × M_h. The method consists of transmitting the finite element solution (u_h, p_h) of the 3D steady Navier–Stokes equations into the finite element solution pairs (u_h^n, p_h^n), based on the finite element space pair X_h × M_h, of the 3D steady linearized Navier–Stokes equations by using the Stokes, Newton and Oseen iterative methods, where the finite element space pair X_h × M_h satisfies the discrete inf-sup condition in a 3D domain Ω. Here, we present the weak formulations of the FE method for solving the 3D steady Stokes, Newton and Oseen iterative equations, establish the existence and uniqueness of the FE solution (u_h^n, p_h^n) of these iterative equations, and deduce the convergence with respect to (σ, h) of the FE solution (u_h^n, p_h^n) to the exact solution (u, p) of the 3D steady Navier–Stokes equations in the H^1–L^2 norm. Finally, we also give the convergence order with respect to (σ, h) of the FE velocity u_h^n to the exact velocity u of the 3D steady Navier–Stokes equations in the L^2 norm.

4.
The Calogero–Leyvraz Lagrangian framework, associated with the dynamics of a charged particle moving in a plane under the combined influence of a magnetic field as well as a frictional force, proposed by Calogero and Leyvraz, has some special features. It is endowed with a Shannon "entropic" type kinetic energy term. In this paper, we carry out the constructions of the 2D Lotka–Volterra replicator equations and the N = 2 relativistic Toda lattice systems using this class of Lagrangians. We take advantage of the special structure of the kinetic term and deform the kinetic energy term of the Calogero–Leyvraz Lagrangians using the κ-deformed logarithm as proposed by Kaniadakis and Tsallis. This method yields the new construction of the κ-deformed Lotka–Volterra replicator and relativistic Toda lattice equations.
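The κ-deformed logarithm used for the deformation above has a simple closed form, ln_κ(x) = (x^κ − x^(−κ))/(2κ), and recovers the ordinary logarithm as κ → 0; a minimal sketch:

```python
import math

def kappa_log(x, kappa):
    """Kaniadakis kappa-deformed logarithm:
    ln_kappa(x) = (x**kappa - x**(-kappa)) / (2*kappa),
    which tends to the ordinary ln(x) as kappa -> 0."""
    if abs(kappa) < 1e-12:
        return math.log(x)  # undeformed limit
    return (x ** kappa - x ** (-kappa)) / (2.0 * kappa)
```

For any κ, ln_κ(1) = 0, mirroring the ordinary logarithm; the deformation mainly changes the tail behavior for large and small arguments.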

5.
The Kullback–Leibler divergence KL(p, q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, as in approximate computation or as a measure of error when learning a probability distribution. For high-dimensional probability distributions, such as those associated with Bayesian networks, direct computation can be unfeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever necessary. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, built on pgmpy, a library for working with probabilistic graphical models.
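To make the quantity concrete, here is a brute-force sketch of KL(P ∥ Q) for two toy two-variable "networks" with different structures, computed by enumerating the full joint, which is exactly the exponential blow-up that the paper's deletion-and-caching algorithm avoids. All probability tables below are invented for illustration.

```python
import itertools
import math

# Toy "Bayesian networks" over binary variables A, B:
# network P has edge A -> B; network Q treats A and B as independent.
pA = {0: 0.6, 1: 0.4}
pB_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
qA = {0: 0.5, 1: 0.5}
qB = {0: 0.5, 1: 0.5}

def kl_divergence():
    """KL(P || Q) by enumerating every joint configuration; feasible only
    for tiny networks, hence the need for smarter marginal computation."""
    kl = 0.0
    for a, b in itertools.product([0, 1], repeat=2):
        p = pA[a] * pB_given_A[a][b]   # joint under P's factorization
        q = qA[a] * qB[b]              # joint under Q's factorization
        kl += p * math.log(p / q)
    return kl

kl = kl_divergence()
```

With n binary variables the joint has 2^n entries, so enumeration is only a correctness baseline against which marginal-based algorithms can be checked.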

6.
The transverse momentum spectra of different types of particles, π±, K±, p and p̄, produced at mid-(pseudo)rapidity in different centrality lead–lead (Pb–Pb) collisions at 2.76 TeV; proton–lead (p–Pb) collisions at 5.02 TeV; xenon–xenon (Xe–Xe) collisions at 5.44 TeV; and proton–proton (pp) collisions at 0.9, 2.76, 5.02, 7 and 13 TeV, were analyzed by the blast-wave model with fluctuations. With the experimental data measured by the ALICE and CMS Collaborations at the Large Hadron Collider (LHC), the kinetic freeze-out temperature, transverse flow velocity and proper time were extracted from fitting the transverse momentum spectra. In nucleus–nucleus (A–A) and proton–nucleus (p–A) collisions, the three parameters decrease as event centrality decreases from central to peripheral, indicating higher degrees of excitation, quicker expansion velocities and longer evolution times for central collisions. In pp collisions, the kinetic freeze-out temperature is nearly invariant with increasing energy, though the transverse flow velocity and proper time increase slightly over the considered energy range.

7.
Detection of faults at the incipient stage is critical to improving the availability and continuity of satellite services. The application of a local optimum projection vector and the Kullback–Leibler (KL) divergence can improve the detection rate of incipient faults, but it suffers from high time complexity. We propose decomposing the KL divergence in the original optimization model and applying the property of the generalized Rayleigh quotient to reduce the time complexity. Additionally, we establish two distribution models for the subfunctions F_1(w) and F_3(w) to detect slight anomalous behavior of the mean and covariance. The effectiveness of the proposed method was verified through a numerical simulation case and a real satellite fault case. The results demonstrate the advantages of low computational complexity and high sensitivity to incipient faults.
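As a toy illustration of why KL divergence suits incipient-fault detection, the closed-form KL between two univariate Gaussians responds to even a small drift in the mean or variance. This is only a sketch, not the paper's projected multivariate statistic, and the fault magnitude below is invented.

```python
import math
import random

def gaussian_kl(mu1, var1, mu2, var2):
    """KL(N(mu1, var1) || N(mu2, var2)) in closed form:
    0.5 * (ln(var2/var1) + (var1 + (mu1-mu2)**2)/var2 - 1)."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(5000)]   # nominal behavior
faulty = [random.gauss(0.05, 1.0) for _ in range(5000)]   # incipient mean shift

def moments(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

m0, v0 = moments(healthy)
m1, v1 = moments(faulty)
kl = gaussian_kl(m1, v1, m0, v0)   # grows with the size of the drift
```

Thresholding such a divergence (estimated over sliding windows) is a common way to flag drifts long before they show up as outright limit violations.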

8.
In several applications, the assumption of normality is violated in data with some level of skewness, and this skewness affects the estimation of the mean. The class of skew-normal distributions is considered, given its flexibility for modeling data with an asymmetry parameter. In this paper, we consider two estimation methods for the location parameter μ in the skew-normal setting, where the coefficient of variation and the skewness parameter are known: the least squares estimator (LSE) and the best unbiased estimator (BUE). The properties of the BUE (which dominates the LSE) are explored using classical theorems of information theory, which provide a way to measure the uncertainty of location parameter estimates. Specifically, inequalities based on convexity yield lower and upper bounds for the differential entropy and Fisher information. Some simulations illustrate the behavior of these bounds.

9.
Uniform error estimates with power-type asymptotic constants of the finite element method for the unsteady Navier–Stokes equations are deduced in this paper. By introducing an iterative scheme and studying its convergence, we first derive that the solution of the Navier–Stokes equations is bounded by power-type constants, avoiding the Gronwall lemma, which generates exponential-type factors. The technique is then extended to the error estimate of the long-time finite element approximation. The analysis shows that, under some assumptions on the given data, the asymptotic constants in the finite element error estimates for the unsteady Navier–Stokes equations are uniformly power functions of the initial data, the viscosity, and the body force for all time t > 0. Finally, some numerical examples are given to verify the theoretical predictions.

10.
We address the inverse Frobenius–Perron problem: given a prescribed target distribution ρ, find a deterministic map M such that iterations of M tend to ρ in distribution. We show that all solutions may be written in terms of a factorization that combines the forward and inverse Rosenblatt transformations with a uniform map; that is, a map under which the uniform distribution on the d-dimensional hypercube is invariant. Indeed, every solution is equivalent to the choice of a uniform map. We motivate this factorization via one-dimensional examples, and then use the factorization to present solutions in one and two dimensions induced by a range of uniform maps.
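A one-dimensional sketch of the factorization described above, assuming the target ρ is Exp(1) and taking the tent map as the uniform map: the composed map M = F⁻¹ ∘ T ∘ F, with F the target CDF, then leaves ρ invariant.

```python
import math
import random

def F(x):
    """CDF of the target density rho = Exp(1) (forward Rosenblatt transform)."""
    return 1.0 - math.exp(-x)

def F_inv(u):
    """Inverse Rosenblatt transform for Exp(1)."""
    return -math.log(1.0 - u)

def tent(u):
    """Tent map on [0,1]: a 'uniform map', i.e. it preserves Lebesgue measure."""
    return 1.0 - abs(2.0 * u - 1.0)

def M(x):
    """Deterministic map whose iterations leave Exp(1) invariant."""
    return F_inv(tent(F(x)))

random.seed(1)
# If x ~ Exp(1), then M(x) ~ Exp(1) as well; check the mean empirically.
samples = [M(random.expovariate(1.0)) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

Swapping the tent map for any other Lebesgue-measure-preserving map of [0, 1] yields a different solution M with the same invariant distribution, which is the sense in which every solution corresponds to a choice of uniform map.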

11.
This work explores the performance of a thermal–magnetic engine of Otto type, considering as a working substance an effective interacting spin model corresponding to the q-state clock model. We obtain all the thermodynamic quantities for the q = 2, 4, 6, and 8 cases on a small lattice (3×3 with free boundary conditions) by using the exact partition function calculated from the energies of all the accessible microstates of the system. The extension to bigger lattices is performed using the mean-field approximation. Our results indicate that the total work extraction of the cycle is highest for the q = 4 case, while the performance for the Ising model (q = 2) is the lowest of all cases studied. These results are strongly linked to the phase diagram of the working substance and the location of the cycle in the different magnetic phases present, where we find that a transition from a ferromagnetic to a paramagnetic phase extracts more work than one of the Berezinskii–Kosterlitz–Thouless to paramagnetic type. Additionally, as the size of the lattice increases, the work extraction is lower than for smaller lattices for all values of q considered in this study.
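For the 3×3 free-boundary lattice, the exact partition function is directly accessible by enumerating all microstates, as described above; a minimal sketch for the q = 2 (Ising) case with an assumed coupling J = 1:

```python
import itertools
import math

# q = 2 clock (Ising) model on a 3x3 lattice with free boundary conditions:
# enumerate all 2^9 = 512 microstates to get the exact partition function.
L, q, J = 3, 2, 1.0

def bonds():
    """Nearest-neighbor bonds of the LxL lattice, free boundaries."""
    for i in range(L):
        for j in range(L):
            if j + 1 < L:
                yield (i * L + j, i * L + j + 1)    # horizontal bond
            if i + 1 < L:
                yield (i * L + j, (i + 1) * L + j)  # vertical bond

BONDS = list(bonds())

def energy(state):
    """Clock-model energy: -J * sum over bonds of cos(2*pi*(s_a - s_b)/q)."""
    return -J * sum(math.cos(2.0 * math.pi * (state[a] - state[b]) / q)
                    for a, b in BONDS)

def partition_function(beta):
    return sum(math.exp(-beta * energy(s))
               for s in itertools.product(range(q), repeat=L * L))

Z0 = partition_function(0.0)   # infinite temperature: all 512 states equal
Z1 = partition_function(0.5)
```

Once Z(β) is in hand, free energy, internal energy and magnetic quantities follow by numerical differentiation, which is how an Otto cycle over this working substance can be evaluated exactly at this lattice size. Larger q only changes `range(q)`, at a cost of q^9 states.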

12.
Among various modifications of the permutation entropy, defined as the Shannon entropy of the ordinal pattern distribution underlying a system, a variant based on Rényi entropies has been considered in a few papers. This paper discusses the relatively new concept of Rényi permutation entropies as a function of the non-negative real number q parameterizing the family of Rényi entropies, which recovers the Shannon entropy for q = 1. Its relationship to the Kolmogorov–Sinai entropy and, for q = 2, to the recently introduced symbolic correlation integral is also touched upon.
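A minimal sketch of the quantity under discussion: count ordinal patterns of order m in a time series and apply the Rényi entropy with parameter q, with the Shannon permutation entropy recovered at q = 1. The toy series below is invented.

```python
import math
from collections import Counter

def ordinal_patterns(series, m):
    """Map each length-m window to its ordinal pattern, i.e. the permutation
    of indices that sorts the window's values ascending."""
    pats = []
    for t in range(len(series) - m + 1):
        window = series[t:t + m]
        pats.append(tuple(sorted(range(m), key=lambda k: window[k])))
    return pats

def renyi_permutation_entropy(series, m, q):
    """Renyi entropy of the ordinal-pattern distribution:
    (1/(1-q)) * ln(sum_i p_i**q), with the Shannon case at q = 1."""
    counts = Counter(ordinal_patterns(series, m))
    n = sum(counts.values())
    probs = [c / n for c in counts.values()]
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** q for p in probs)) / (1.0 - q)

series = [4, 7, 9, 10, 6, 11, 3, 5, 8, 2]
h1 = renyi_permutation_entropy(series, 3, 1.0)  # Shannon permutation entropy
h2 = renyi_permutation_entropy(series, 3, 2.0)  # q = 2 (collision) variant
```

Since Rényi entropy is non-increasing in q, the q = 2 value never exceeds the Shannon value, and both are bounded by ln(m!) for patterns of order m.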

13.
This work is devoted to deriving the entropy of a single photon in a beam of light from first principles. Based on the quantum processes of light–matter interaction, we find that, if the light is not in equilibrium, there are two different ways, depending on whether the photon is being added or being removed from the light, of defining the single-photon entropy of this light. However, when the light is in equilibrium at temperature T, the two definitions are equivalent and the photon entropy of this light is hν/T. From first principles, we also re-derive the Jüttner velocity distribution showing that, even without interatomic collisions, two-level atoms will relax to the state satisfying the Maxwell–Jüttner velocity distribution when they are moving in blackbody radiation fields.

14.
As is known, a method to introduce non-conventional statistics may be realized by modifying the number of possible combinations for putting particles into a collection of single-particle states. In this paper, we assume that the weight factor of the possible configurations of a system of interacting particles can be obtained by suitably generalizing the combinatorics, according to a certain analytical function f_{π}(n) of the actual number of particles present in each energy level. Following this approach, the configurational Boltzmann entropy is revisited in a very general manner starting from a continuous deformation of the multinomial coefficients depending on a set of deformation parameters {π}. It is shown that, when f_{π}(n) is related to the solutions of a simple linear difference–differential equation, the emerging entropy is a scaled version, in the occupation number representation, of the entropy of degree (κ, r) known, in the framework of information theory, as the Sharma–Taneja–Mittal entropic form.

15.
This study deals with drift parameter estimation problems in the sub-fractional Vasicek process given by dx_t = θ(μ − x_t)dt + dS_t^H, with θ > 0 and μ ∈ R unknown and t ≥ 0; here, S^H represents a sub-fractional Brownian motion (sfBm). We introduce new estimators θ̂ for θ and μ̂ for μ based on discrete-time observations and use techniques from Nourdin–Peccati analysis. For the proposed estimators θ̂ and μ̂, strong consistency and asymptotic normality are established by employing the properties of S^H. Moreover, we provide numerical simulations for sfBm and the related Vasicek-type process for different values of the Hurst index H.

16.
The parameters revealing the collective behavior of hadronic matter, extracted from the transverse momentum spectra of π+, π−, K+, K−, p, p̄, K_s^0, Λ, Λ̄, Ξ− and Ξ̄+, and Ω− and Ω̄+ (or Ω− + Ω̄+), produced in the most central and most peripheral gold–gold (AuAu), copper–copper (CuCu) and lead–lead (PbPb) collisions at 62.4 GeV, 200 GeV and 2760 GeV, respectively, are reported. In addition to studying the nucleus–nucleus (AA) collisions, we analyzed the particles mentioned above produced in pp collisions at the same center-of-mass energies (62.4 GeV, 200 GeV and 2760 GeV) to compare with the most peripheral AA collisions. We used the Tsallis–Pareto-type function to extract the effective temperature from the transverse momentum spectra of the particles. The effective temperature is slightly larger in central collisions than in peripheral collisions and is mass-dependent. The mean transverse momentum and the multiplicity parameter (N_0) are extracted and show the same behavior as the effective temperature. All three extracted parameters in pp collisions are closer to those of the most peripheral AA collisions at the same center-of-mass energy, revealing that the extracted parameters have the same thermodynamic nature. Furthermore, we report that the mean transverse momentum in PbPb collisions is larger than that in AuAu and CuCu collisions, while the latter two are nearly equal, which shows a comparatively strong dependence on energy and a weak dependence on the size of the system. The multiplicity parameter N_0 in central AA collisions depends on the interacting system's size and is larger for the bigger system.
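A minimal sketch of extracting an effective temperature with a Tsallis–Pareto-type function. The functional form below is one common variant, and the "measured" spectrum is synthetic, so this only illustrates the fitting idea, not the paper's actual analysis.

```python
def tsallis_pareto(pt, C, T, n):
    """Tsallis-Pareto-type spectrum (one common variant, an assumption here):
    f(pT) = C * pT * (1 + pT/(n*T))**(-n); T acts as an effective temperature."""
    return C * pt * (1.0 + pt / (n * T)) ** (-n)

# Synthetic "measured" spectrum generated with T = 0.12 GeV, n = 7.
pts = [0.2 * k for k in range(1, 20)]
data = [tsallis_pareto(pt, 1.0, 0.12, 7.0) for pt in pts]

def sse(T):
    """Sum of squared residuals at trial temperature T (C, n held fixed)."""
    return sum((tsallis_pareto(pt, 1.0, T, 7.0) - d) ** 2
               for pt, d in zip(pts, data))

# Crude grid search over T in [0.08, 0.16] GeV; a real fit would use a
# proper nonlinear least-squares routine and free all parameters.
best_T = min((0.08 + 0.001 * i for i in range(81)), key=sse)
```

Comparing best-fit T across centrality classes and collision systems is exactly the kind of comparison the abstract reports (central vs. peripheral, AA vs. pp).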

17.
This paper introduces a closed-form expression for the Kullback–Leibler divergence (KLD) between two central multivariate Cauchy distributions (MCDs), which have recently been used in different signal and image processing applications where non-Gaussian models are needed. In this overview, MCDs are surveyed, and some new results and properties for the KLD are derived and discussed. In addition, the KLD for MCDs is shown to be expressible as a function of the Lauricella D-hypergeometric series F_D^(p). Finally, a comparison is made between the Monte Carlo sampling method for approximating the KLD and the numerical value of its closed-form expression. The Monte Carlo approximation of the KLD is shown to converge to its theoretical value as the number of samples goes to infinity.
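For intuition, in the univariate special case the KLD between two Cauchy laws has a known closed form, log(((γ₁+γ₂)² + (μ₁−μ₂)²)/(4γ₁γ₂)), which a Monte Carlo estimate approaches as the sample count grows; the multivariate Lauricella expression of the paper is not reproduced here.

```python
import math
import random

def cauchy_kl_closed(mu1, g1, mu2, g2):
    """Closed-form KL between univariate Cauchy(mu, gamma) distributions
    (1D special case; the paper treats the multivariate extension)."""
    return math.log(((g1 + g2) ** 2 + (mu1 - mu2) ** 2) / (4.0 * g1 * g2))

def cauchy_pdf(x, mu, g):
    return g / (math.pi * ((x - mu) ** 2 + g ** 2))

def cauchy_kl_mc(mu1, g1, mu2, g2, n=200000, seed=7):
    """Monte Carlo estimate: average of log(p1/p2) under samples from p1,
    drawn by inverse-CDF sampling of the Cauchy distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = mu1 + g1 * math.tan(math.pi * (rng.random() - 0.5))
        total += math.log(cauchy_pdf(x, mu1, g1) / cauchy_pdf(x, mu2, g2))
    return total / n

exact = cauchy_kl_closed(0.0, 1.0, 1.0, 2.0)
approx = cauchy_kl_mc(0.0, 1.0, 1.0, 2.0)
```

The log-ratio of two Cauchy densities is bounded, so the Monte Carlo estimator has finite variance here and its error shrinks like 1/√n, mirroring the convergence comparison made in the paper.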

18.
Originally, the Carnot cycle was a theoretical thermodynamic cycle that provided an upper limit on the efficiency that any classical thermodynamic engine can achieve during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference by the application of work to the system. The first aim of this paper is to introduce and study economic Carnot cycles in Roegenian economics, using our thermodynamic–economic dictionary. These cycles are described in both a Q–P diagram and an E–I diagram. An economic Carnot cycle has maximum efficiency for a reversible economic "engine". Three problems, together with their solutions, clarify the meaning of the economic Carnot cycle in our context. We then transform the ideal gas theory into the ideal income theory. The second aim is to analyze the economic Van der Waals equation, showing that the diffeomorphism-invariant information about the Van der Waals surface can be obtained by examining a cuspidal potential.

19.
This paper addresses the problem of robust angle-of-arrival (AOA) target localization in the presence of uniformly distributed noise, which is modeled as a mixture of a Laplacian distribution and a uniform distribution. Motivated by the distribution of the noise, we develop a localization model by using the p-norm with 0 ≤ p < 2 for the measurement error and the 1-norm as the regularization term. Then, an estimator introducing the proximal operator into the framework of the alternating direction method of multipliers (POADMM) is derived to solve the convex optimization problem when 1 ≤ p < 2. However, when 0 ≤ p < 1, the corresponding optimization problem is nonconvex and nonsmooth. To derive a convergent method for this nonconvex and nonsmooth target localization problem, we propose a smoothed POADMM estimator (SPOADMM) by introducing a smoothing strategy into the optimization model. Eventually, the proposed algorithms are compared with some state-of-the-art robust algorithms via numerical simulations, and their effectiveness under uniformly distributed noise is discussed from the perspective of the root-mean-squared error (RMSE). The experimental results verify that the proposed method is more robust against outliers and less sensitive to the selected parameters, especially the variance of the measurement noise.

20.
A unipolar electrohydrodynamic (UP-EHD) pump flow is studied with known electric potential at the emitter and zero electric potential at the collector. The model is designed for electric potential, charge density, and electric field. The dimensionless parameters, namely the electrical source number (Es), the electrical Reynolds number (ReE), and electrical slip number (Esl), are considered with wide ranges of variation to analyze the UP-EHD pump flow. To interpret the pump flow of the UP-EHD model, a hybrid metaheuristic solver is designed, consisting of the recently developed technique sine–cosine algorithm (SCA) and sequential quadratic programming (SQP) under the influence of an artificial neural network. The method is abbreviated as ANN-SCA-SQP. The superiority of the technique is shown by comparing the solution with reference solutions. For a large data set, the technique is executed for one hundred independent experiments. The performance is evaluated through performance operators and convergence plots.
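A bare-bones sketch of the sine–cosine algorithm (SCA) component on a toy sphere objective; the paper's hybrid adds SQP refinement and a neural-network model, none of which is attempted here, and all parameter values below are illustrative.

```python
import math
import random

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def sca(obj, dim=2, agents=20, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Minimal sine-cosine algorithm: each agent moves toward/around the best
    position found so far via sine/cosine-weighted steps whose amplitude r1
    shrinks linearly, trading exploration for exploitation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=obj)[:]
    for t in range(iters):
        r1 = 2.0 * (1.0 - t / iters)  # exploration radius decays to 0
        for x in pop:
            for d in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                step = r1 * (math.sin(r2) if rng.random() < 0.5 else math.cos(r2))
                x[d] = min(hi, max(lo, x[d] + step * abs(r3 * best[d] - x[d])))
            if obj(x) < obj(best):
                best = x[:]
    return best, obj(best)

best_x, best_val = sca(sphere)
```

In a hybrid of the kind the abstract describes, a global sampler like this would hand its best candidate to a gradient-based local solver (SQP) for refinement.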
