Similar Documents
15 similar documents found (search time: 15 ms)
1.
In this paper, we study statistical inference for the generalized inverted exponential distribution with a common scale parameter and different shape parameters based on joint progressively type-II censored data. The expectation-maximization (EM) algorithm is applied to compute the maximum likelihood estimates (MLEs) of the parameters, and the observed information matrix is obtained via the missing-value principle. Interval estimates are computed by the bootstrap method. We provide Bayesian inference under both informative and non-informative priors, using the importance sampling technique to derive the Bayesian estimates and credible intervals under the squared error and LINEX loss functions, respectively. Finally, we conduct a Monte Carlo simulation study and a real-data analysis. We also consider order-restricted parameters and provide the corresponding maximum likelihood estimates and Bayesian inference.
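A minimal sketch of the likelihood in play, assuming the standard GIE density f(x) = (αλ/x²)e^(−λ/x)(1 − e^(−λ/x))^(α−1); for simplicity it fits a complete (uncensored) simulated sample by direct numerical optimization rather than the paper's EM algorithm on jointly censored data:

```python
# Illustrative sketch only: complete-sample MLE for the generalized inverted
# exponential (GIE) distribution; the paper itself uses EM on joint
# progressively type-II censored data.
import numpy as np
from scipy.optimize import minimize

def gie_neg_loglik(params, x):
    """Negative log-likelihood of f(x) = (a*l/x^2) e^(-l/x) (1 - e^(-l/x))^(a-1)."""
    alpha, lam = params
    if alpha <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam / x)
    return -np.sum(np.log(alpha) + np.log(lam) - 2 * np.log(x)
                   - lam / x + (alpha - 1) * np.log1p(-z))

rng = np.random.default_rng(0)
# Simulate GIE data by inverting the CDF F(x) = 1 - (1 - exp(-lam/x))**alpha.
alpha_true, lam_true = 2.0, 1.5
u = rng.uniform(size=500)
x = -lam_true / np.log1p(-(1 - u) ** (1 / alpha_true))

res = minimize(gie_neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (alpha, lambda):", res.x)
```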

2.
The inverted exponentiated Rayleigh distribution is a widely used continuous lifetime distribution that plays a key role in lifetime research, and the joint progressively type-II censoring scheme is an effective method for evaluating the quality of products from different assembly lines. In this paper, we study statistical inference for the inverted exponentiated Rayleigh distribution based on joint progressively type-II censored data. The likelihood function and the maximum likelihood estimates are first obtained via the expectation-maximization (EM) algorithm. We then calculate the observed information matrix based on the missing-value principle. The bootstrap-p and bootstrap-t methods are applied to construct confidence intervals, and Bayesian estimates under the squared error and LINEX loss functions are derived using the importance sampling method. Finally, a Monte Carlo simulation study and a real-data analysis are performed.
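A hedged sketch of the bootstrap-p (percentile) interval idea; the exponential-rate estimator and the uncensored data below are placeholders, not the paper's inverted exponentiated Rayleigh censored-data setup:

```python
# Percentile (bootstrap-p) confidence interval, sketched on a toy estimator.
import numpy as np

def estimator(sample):
    return 1.0 / sample.mean()            # MLE of an exponential rate

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)   # true rate = 0.5

B = 2000
boot = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    boot[b] = estimator(resample)

lo, hi = np.percentile(boot, [2.5, 97.5])     # 95% bootstrap-p interval
print(f"point estimate {estimator(data):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```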

3.
In this article, a new one-parameter survival model is proposed using the Kavya–Manoharan (KM) transformation family and the inverse length biased exponential (ILBE) distribution. Statistical properties are obtained: quantiles, moments, incomplete moments, and the moment generating function. Several entropies, such as the Rényi, Tsallis, Havrda–Charvat, and Arimoto entropies, are computed, along with several extropy measures, including extropy, cumulative residual extropy, and negative cumulative residual extropy. When the lifetime of the item under use is assumed to follow the Kavya–Manoharan inverse length biased exponential (KMILBE) distribution, progressive-stress accelerated life tests are considered. Several estimation approaches, such as maximum likelihood, maximum product of spacings, least squares, and weighted least squares estimation, are considered under progressive type-II censoring. Furthermore, interval estimation is carried out by deriving approximate confidence intervals for the parameters. The performance of the estimation approaches is investigated using Monte Carlo simulation, and the relevance and flexibility of the model are demonstrated using two real datasets. The distribution is very flexible and outperforms many known distributions, such as the inverse length biased exponential, inverse Lindley, Lindley, inverse exponential, sine inverse exponential, and sine inverse Rayleigh models.
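A minimal illustration of the maximum product of spacings (MPS) criterion mentioned above, applied to a plain exponential model as a stand-in for the KMILBE distribution:

```python
# MPS sketch: maximize the mean log-spacing of the fitted CDF evaluated at
# the ordered sample (padded with F = 0 and F = 1 at the ends).
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_spacings(rate, x_sorted):
    if rate <= 0:
        return np.inf
    cdf = 1.0 - np.exp(-rate * x_sorted)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return -np.mean(np.log(np.clip(spacings, 1e-300, None)))

rng = np.random.default_rng(2)
x = np.sort(rng.exponential(scale=1 / 1.7, size=300))   # true rate = 1.7

res = minimize_scalar(neg_log_spacings, bounds=(1e-6, 50), args=(x,),
                      method="bounded")
print("MPS estimate of rate:", res.x)
```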

4.
5.
In this paper, the parameter estimation problem for a truncated normal distribution is discussed based on generalized progressive hybrid censored data. The maximum likelihood estimates of the unknown quantities are first derived through the Newton–Raphson and expectation-maximization (EM) algorithms. Based on the asymptotic normality of the maximum likelihood estimators, we develop asymptotic confidence intervals; the percentile bootstrap method is also employed for small sample sizes. Further, Bayes estimates are evaluated under various loss functions, such as the squared error, general entropy, and LINEX loss functions. The Tierney–Kadane approximation, as well as the importance sampling approach, is applied to obtain the Bayesian estimates under proper prior distributions, and the associated Bayesian credible intervals are constructed. Extensive numerical simulations are implemented to compare the performance of the different estimation methods. Finally, a real example is analyzed to illustrate the inference approaches.
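A simplified sketch of the truncated normal likelihood being maximized, assuming left truncation at zero and a complete sample, with a generic optimizer in place of the paper's Newton–Raphson/EM machinery for censored data:

```python
# Complete-sample MLE for a normal distribution left-truncated at zero:
# density phi((x-mu)/sigma) / (sigma * (1 - Phi((0-mu)/sigma))).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def truncnorm_neg_loglik(params, x, lower=0.0):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    z = (x - mu) / sigma
    log_norm_const = norm.logsf((lower - mu) / sigma)   # log P(X >= lower)
    return -np.sum(norm.logpdf(z) - np.log(sigma) - log_norm_const)

rng = np.random.default_rng(3)
raw = rng.normal(1.0, 2.0, size=5000)
x = raw[raw >= 0.0]                       # keep only the truncated observations

res = minimize(truncnorm_neg_loglik, x0=[x.mean(), x.std()], args=(x,),
               method="Nelder-Mead")
print("MLE (mu, sigma):", res.x)
```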

6.
Point and interval estimates for the unknown parameters of an exponentiated half-logistic distribution based on adaptive type-II progressive censoring are obtained in this article. First, the maximum likelihood estimators are derived. Then, the observed and expected Fisher information matrices are obtained to construct asymptotic confidence intervals; the percentile bootstrap and bootstrap-t methods are also put forward for constructing confidence intervals. For Bayesian estimation, the Lindley method is used under three different loss functions. The importance sampling method is also applied to calculate Bayesian estimates and to construct the corresponding highest posterior density (HPD) credible intervals. Finally, extensive simulation studies based on Markov chain Monte Carlo (MCMC) samples are conducted to compare the performance of the estimators, and a real data set is analyzed for illustrative purposes.
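A small sketch of the highest posterior density (HPD) interval construction from posterior draws; the gamma draws below are a synthetic stand-in for the MCMC/importance-sampling output used in the paper:

```python
# HPD interval: the shortest interval containing a given fraction of the
# sorted posterior draws.
import numpy as np

def hpd_interval(samples, cred=0.95):
    s = np.sort(samples)
    n = len(s)
    k = int(np.floor(cred * n))
    widths = s[k:] - s[: n - k]      # widths of all candidate intervals
    i = np.argmin(widths)            # index of the shortest one
    return s[i], s[i + k]

rng = np.random.default_rng(4)
draws = rng.gamma(shape=3.0, scale=0.5, size=20000)   # skewed toy posterior
print("95% HPD interval:", hpd_interval(draws))
```

For skewed posteriors like this one, the HPD interval is shorter than the equal-tailed credible interval, which is why it is often preferred.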

7.
By calculating the Kullback–Leibler divergence between two probability measures belonging to different exponential families dominated by the same measure, we obtain a formula that generalizes the ordinary Fenchel–Young divergence. Inspired by this formula, we define the duo Fenchel–Young divergence and report a majorization condition on its pair of strictly convex generators, which guarantees that this divergence is always non-negative. The duo Fenchel–Young divergence is also equivalent to a duo Bregman divergence. We show how to use these duo divergences by calculating the Kullback–Leibler divergence between densities of truncated exponential families with nested supports, and report a formula for the Kullback–Leibler divergence between truncated normal distributions. Finally, we prove that the skewed Bhattacharyya distances between truncated exponential families amount to equivalent skewed duo Jensen divergences.
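As a numeric companion to such closed-form results, the sketch below evaluates the Kullback–Leibler divergence between two truncated normal densities with nested supports by quadrature; the particular parameters are arbitrary illustrations:

```python
# KL(p || q) between truncated normals with nested supports, by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.stats import truncnorm

def tn(mu, sigma, lo, hi):
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm(a, b, loc=mu, scale=sigma)

p = tn(0.0, 1.0, -1.0, 1.0)     # support [-1, 1]
q = tn(0.5, 1.5, -2.0, 2.0)     # wider support, so KL(p || q) is finite

kl, _ = quad(lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x)), -1.0, 1.0)
print("KL(p || q) ≈", kl)
```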

8.
The Chernoff information between two probability measures is a statistical divergence measuring their deviation, defined as their maximally skewed Bhattacharyya distance. Although the Chernoff information was originally introduced for bounding the Bayes error in statistical hypothesis testing, the divergence has since found many other applications, owing to its empirical robustness, in areas ranging from information fusion to quantum information. From the viewpoint of information theory, the Chernoff information can also be interpreted as a minmax symmetrization of the Kullback–Leibler divergence. In this paper, we first revisit the Chernoff information between two densities of a measurable Lebesgue space by considering the exponential families induced by their geometric mixtures: the so-called likelihood ratio exponential families. Second, we show how to (i) solve exactly the Chernoff information between any two univariate Gaussian distributions, or get a closed-form formula using symbolic computing, (ii) report a closed-form formula for the Chernoff information of centered Gaussians with scaled covariance matrices, and (iii) use a fast numerical scheme to approximate the Chernoff information between any two multivariate Gaussian distributions.
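A sketch of the numerical-scheme idea in (iii), assuming the definition of the Chernoff information as the maximally skewed Bhattacharyya distance and using a simple bounded 1-D search over the skew parameter:

```python
# Chernoff information C(p, q) = max over alpha in (0, 1) of
# D_alpha(p : q) = -log integral p^alpha q^(1 - alpha), univariate Gaussians.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

p, q = norm(0.0, 1.0), norm(2.0, 1.5)

def skewed_bhattacharyya(alpha):
    integrand = lambda x: np.exp(alpha * p.logpdf(x) + (1 - alpha) * q.logpdf(x))
    val, _ = quad(integrand, -20, 20)
    return -np.log(val)

res = minimize_scalar(lambda a: -skewed_bhattacharyya(a),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"Chernoff information ≈ {-res.fun:.4f} at alpha* ≈ {res.x:.3f}")
```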

9.
Entropy measures the uncertainty associated with a random variable and has important applications in cybernetics, probability theory, astrophysics, the life sciences, and other fields. Recently, many authors have focused on the estimation of entropy under different lifetime distributions; however, entropy estimation for the generalized Bilal (GB) distribution has not yet been addressed. In this paper, we consider the estimation of the entropy and the parameters of the GB distribution based on adaptive type-II progressive hybrid censored data. Maximum likelihood estimates of the entropy and the parameters are obtained using the Newton–Raphson iteration method. Bayesian estimates under different loss functions are provided with the help of Lindley's approximation. Approximate confidence intervals and Bayesian credible intervals for the parameters and the entropy are obtained using the delta method and Markov chain Monte Carlo (MCMC) methods, respectively. Monte Carlo simulation studies are carried out to assess the performance of the different point and interval estimates. Finally, a real data set is analyzed for illustrative purposes.
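A hedged illustration of the delta-method interval for an entropy estimate, using the exponential distribution (differential entropy h = 1 − log λ) as a stand-in for the generalized Bilal model:

```python
# Delta method: Var(h_hat) ≈ (dh/d lambda)^2 * Var(lambda_hat)
#             = (1/lambda^2) * (lambda^2 / n) = 1/n for the exponential case.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 400
x = rng.exponential(scale=1 / 2.5, size=n)    # true rate = 2.5

rate_hat = 1.0 / x.mean()                     # MLE; asymptotic var = rate^2/n
h_hat = 1.0 - np.log(rate_hat)                # plug-in entropy estimate
se = np.sqrt(1.0 / n)                         # delta-method standard error
z = norm.ppf(0.975)
print(f"entropy ≈ {h_hat:.3f}, 95% CI ({h_hat - z*se:.3f}, {h_hat + z*se:.3f})")
```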

10.
In this article, the “truncated-composed” scheme is applied to the Burr X distribution to motivate a new family of univariate continuous distributions, called the truncated Burr X generated family. It is mathematically simple and provides additional modeling freedom for any parent distribution. Additional functionality is conferred on the probability density and hazard rate functions, improving their peakedness, asymmetry, tail, and flatness levels. These characteristics are represented analytically and graphically for three special distributions of the family derived from the exponential, Rayleigh, and Lindley distributions. We then conduct asymptotic, first-order stochastic dominance, series expansion, Tsallis entropy, and moment studies, and investigate useful risk measures. The remainder of the study is devoted to the statistical use of the associated models. In particular, we develop an adapted maximum likelihood methodology for efficiently estimating the model parameters. The special distribution extending the exponential distribution is applied as a statistical model to fit two sets of actuarial and financial data, where it performs better than a wide variety of competing non-nested models. Numerical applications for risk measures are also given.
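One plausible reading of the “truncated-composed” construction, sketched below: truncate the Burr X CDF to the unit interval and compose it with a parent CDF. The exact parametrization here is an assumption for illustration, not necessarily the paper's definition:

```python
# Hypothetical truncated Burr X generated family: F(x) = K_trunc(G(x)),
# where K(u) = (1 - exp(-u^2))^theta is the Burr X CDF (unit scale) and
# K_trunc renormalizes it over (0, 1). The parent G below is exponential.
import numpy as np

def truncated_burr_x_cdf(u, theta):
    return (1.0 - np.exp(-u**2))**theta / (1.0 - np.exp(-1.0))**theta

def tbx_exponential_cdf(x, theta, lam):
    g = 1.0 - np.exp(-lam * np.asarray(x, dtype=float))   # parent CDF G(x)
    return truncated_burr_x_cdf(g, theta)

x = np.linspace(0.0, 5.0, 6)
print(tbx_exponential_cdf(x, theta=0.8, lam=1.2))
```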

11.
Two Rényi-type generalizations of the Shannon cross-entropy, the Rényi cross-entropy and the Natural Rényi cross-entropy, were recently used as loss functions for the improved design of deep learning generative adversarial networks. In this work, we derive the Rényi and Natural Rényi differential cross-entropy measures in closed form for a wide class of common continuous distributions belonging to the exponential family, and we tabulate the results for ease of reference. We also summarise the Rényi-type cross-entropy rates between stationary Gaussian processes and between finite-alphabet time-invariant Markov sources.
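Both Rényi-type cross-entropies reduce to the Shannon cross-entropy in the α → 1 limit; as a small sanity check of the kind of Gaussian closed form such tables contain, the sketch below compares the known Shannon cross-entropy formula for two normals with direct quadrature:

```python
# Shannon cross-entropy H(p, q) = -E_p[log q] for two univariate normals:
# closed form 0.5*log(2*pi*s_q^2) + (s_p^2 + (mu_p - mu_q)^2) / (2*s_q^2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu_p, s_p, mu_q, s_q = 0.0, 1.0, 1.0, 2.0
p, q = norm(mu_p, s_p), norm(mu_q, s_q)

closed_form = 0.5 * np.log(2 * np.pi * s_q**2) \
    + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2)
numeric, _ = quad(lambda x: -p.pdf(x) * q.logpdf(x), -30, 30)
print(closed_form, numeric)   # should agree to quadrature accuracy
```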

12.
Detection and measurement of abrupt changes in a process can provide important tools for decision making in systems management. In particular, they can be used to predict the onset of a sudden event, such as a rare, extreme event that causes an abrupt dynamical change in the system. Here, we investigate the prediction capability of information theory by focusing on how sensitive an information-geometric diagnostic (the information length) and an entropy-based information-theoretic method (the information flow) are to abrupt changes. To this end, we use a non-autonomous Kramers equation, including a sudden perturbation to the system to mimic the onset of a sudden event, and calculate time-dependent probability density functions (PDFs) and various statistical quantities with the help of numerical simulations. We show that the information length diagnostics predict the onset of a sudden event better than the information flow. Furthermore, it is explicitly shown that the information flow, like other entropy-based measures, has limitations in measuring perturbations that do not affect the entropy.
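A toy sketch of the information-length diagnostic, which integrates the square root of E[(∂t log p)²] over time; the drifting Gaussian snapshots below stand in for the numerically computed Kramers-equation PDFs:

```python
# Information length L(t) = integral over time of
# sqrt( integral of (dp/dt)^2 / p dx ), computed from gridded PDF snapshots.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

x = np.linspace(-10, 10, 4001)
t = np.linspace(0.0, 2.0, 400)
dt = t[1] - t[0]

# Toy time-dependent PDF: the mean drifts and the width grows slowly.
pdfs = np.array([norm.pdf(x, loc=np.tanh(ti), scale=1.0 + 0.2 * ti)
                 for ti in t])

dp_dt = np.gradient(pdfs, dt, axis=0)
fisher_rate = trapezoid(dp_dt**2 / np.clip(pdfs, 1e-300, None), x, axis=1)
info_length = trapezoid(np.sqrt(fisher_rate), t)
print("information length L ≈", info_length)
```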

13.
In this paper, we recall, extend, and compute some information measures for the concomitants of generalized order statistics (GOS) from the Farlie–Gumbel–Morgenstern (FGM) family. We focus on two types of information measures: some related to Shannon entropy and some related to Tsallis entropy. Among the measures considered are the residual and past entropies, which are important in a reliability context.
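A quick numeric illustration of the two entropy families involved, computing the Shannon entropy and the Tsallis entropy S_q = (1 − ∫p^q dx)/(q − 1) for a standard normal by quadrature:

```python
# Shannon and Tsallis entropies of a standard normal, by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p = norm(0.0, 1.0)
shannon, _ = quad(lambda x: -p.pdf(x) * p.logpdf(x), -30, 30)

q_idx = 1.5
integral, _ = quad(lambda x: p.pdf(x)**q_idx, -30, 30)
tsallis = (1.0 - integral) / (q_idx - 1.0)
print(f"Shannon ≈ {shannon:.4f}, Tsallis(q=1.5) ≈ {tsallis:.4f}")
```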

14.
We generalize the Jensen-Shannon divergence and the Jensen-Shannon diversity index by considering a variational definition with respect to a generic mean, thereby extending the notion of Sibson’s information radius. The variational definition applies to any arbitrary distance and yields a new way to define a Jensen-Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed families of probability measures, we get relative Jensen-Shannon divergences and their equivalent Jensen-Shannon symmetrizations of distances that generalize the concept of information projections. Finally, we touch upon applications of these variational Jensen-Shannon divergences and diversity indices to clustering and quantization tasks of probability measures, including statistical mixtures.
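A minimal sketch of the generic-mean symmetrization: the ordinary Jensen-Shannon divergence uses the arithmetic mean of p and q, and swapping in a normalized geometric mean gives another instance of the construction (the normalization choice here is an illustrative assumption):

```python
# Jensen-Shannon-type symmetrization with a pluggable mean on the simplex.
import numpy as np

def kl(p, q):
    return np.sum(p * np.log(p / q))

def js_with_mean(p, q, mean):
    m = mean(p, q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

arithmetic = lambda p, q: 0.5 * (p + q)

def geometric(p, q):
    g = np.sqrt(p * q)
    return g / g.sum()            # renormalize to a probability vector

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
print("JSD (arithmetic mean):", js_with_mean(p, q, arithmetic))
print("JS symmetrization (geometric mean):", js_with_mean(p, q, geometric))
```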

15.
Time is a key element of consciousness as it includes multiple timescales from shorter to longer ones. This is reflected in our experience of various short-term phenomenal contents at discrete points in time as part of an ongoing, more continuous, and long-term ‘stream of consciousness’. Can Integrated Information Theory (IIT) account for this multitude of timescales of consciousness? According to the theory, the relevant spatiotemporal scale for consciousness is the one in which the system reaches the maximum cause-effect power; IIT currently predicts that experience occurs on the order of short timescales, namely, between 100 and 300 ms (theta and alpha frequency range). This can well account for the integration of single inputs into a particular phenomenal content. However, such short timescales leave open the temporal relation of specific phenomenal contents to others during the course of the ongoing time, that is, the stream of consciousness. For that purpose, we converge the IIT with the Temporo-spatial Theory of Consciousness (TTC), which, assuming a multitude of different timescales, can take into view the temporal integration of specific phenomenal contents with other phenomenal contents over time. On the neuronal side, this is detailed by considering those neuronal mechanisms driving the non-additive interaction of pre-stimulus activity with the input resulting in stimulus-related activity. Due to their non-additive interaction, the single input is not only integrated with others in the short-term timescales of 100–300 ms (alpha and theta frequencies) (as predicted by IIT) but, at the same time, also virtually expanded in its temporal (and spatial) features; this is related to the longer timescales (delta and slower frequencies) that are carried over from pre-stimulus to stimulus-related activity. Such a non-additive pre-stimulus-input interaction amounts to temporo-spatial expansion as a key mechanism of TTC for the constitution of phenomenal contents including their embedding or nesting within the ongoing temporal dynamic, i.e., the stream of consciousness. In conclusion, we propose converging the short-term integration of inputs postulated in IIT (100–300 ms as in the alpha and theta frequency range) with the longer timescales (in delta and slower frequencies) of temporo-spatial expansion in TTC.
