Similar Documents
20 similar documents found (search time: 639 ms)
1.
This paper discusses the estimation of the stress-strength reliability parameter R = P(Y &lt; X) based on complete samples when stress and strength are two independent Poisson half-logistic (PHLD) random variables. We address the estimation of R both in the general case and when the scale parameter is common. Classical and Bayesian estimation (BE) techniques for R are studied. The maximum likelihood estimator (MLE) and its asymptotic distribution are obtained; an approximate asymptotic confidence interval for R is computed using the asymptotic distribution. The non-parametric percentile bootstrap and Student's bootstrap confidence intervals for R are discussed. The Bayes estimators of R are computed using a gamma prior and discussed under various loss functions such as the squared error loss (SEL), absolute error loss (AEL), linear exponential loss (LINEX), generalized entropy loss (GEL) and maximum a posteriori (MAP) criteria. The Metropolis–Hastings algorithm is used to sample the posterior distributions of the estimators of R. The highest posterior density (HPD) credible interval is constructed based on the SEL. Monte Carlo simulations are used to numerically analyze the performance of the MLE and Bayes estimators; the results are quite satisfactory in terms of mean square error (MSE) and confidence intervals. Finally, two real data studies demonstrate the performance of the proposed estimation techniques in practice and illustrate that the PHLD is a good candidate in reliability studies.
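The non-parametric percentile bootstrap mentioned in this abstract can be sketched in a few lines. The toy exponential stress and strength samples below are illustrative stand-ins (the paper uses the Poisson half-logistic distribution), and `bootstrap_ci_R` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_R(x, y, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile-bootstrap confidence interval for R = P(Y < X)."""
    x, y = np.asarray(x), np.asarray(y)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        # empirical estimate of P(Y < X) from all resampled pairs
        stats[b] = (yb[None, :] < xb[:, None]).mean()
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy data: strength x and stress y (exponential stand-ins, true R = 2/3)
x = rng.exponential(2.0, size=50)
y = rng.exponential(1.0, size=50)
lo, hi = bootstrap_ci_R(x, y)
print(lo, hi)
```

The quantiles of the resampled statistic form the interval directly, which is why no distributional assumption on R is needed.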

2.
Quantifying uncertainty for parameter estimates obtained from matched-field geoacoustic inversions using a Bayesian approach requires estimation of the uncertainties in the data due to ambient noise as well as modeling errors. In this study, the variance parameter of the Gaussian error model, hereafter called error variance, is assumed to describe the data uncertainty. In practice, this parameter is not known a priori, and choosing a particular value is often difficult. Hence, to account for the uncertainty in error variance, several methods are introduced for implementing both the full and empirical Bayesian approaches. A full Bayesian approach that permits uncertainty of the error variance to propagate through the parameter estimation processes is a natural way of incorporating the uncertainty of error variance. Due to the large number of unknown parameters in the full Bayesian uncertainty analysis, an alternative, the empirical Bayesian approach, is developed, in which the posterior distributions of model parameters are conditioned on a point estimate of the error variance. Comparisons between the full and empirical Bayesian inferences of model parameters are presented using both synthetic and experimental data.
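A minimal sketch of the empirical Bayes idea described here — conditioning the posterior on a point estimate of the error variance rather than marginalizing over it — on a toy Gaussian-mean problem (the data and variable names are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=0.7, size=400)  # synthetic "observations"

# Empirical Bayes step: replace the unknown error variance
# by a point estimate computed from the data themselves.
sigma2_hat = data.var(ddof=1)

# Posterior for the mean under a flat prior, conditioned on sigma2_hat:
#   mean | data, sigma2_hat  ~  N(xbar, sigma2_hat / n)
n = data.size
post_mean = data.mean()
post_sd = np.sqrt(sigma2_hat / n)
print(post_mean, post_sd)
```

A full Bayesian treatment would instead place a prior on the variance and integrate it out, widening the posterior for the mean; the plug-in version above trades that extra width for a much smaller parameter space.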

3.
We present a case study for Bayesian analysis and proper representation of distributions and dependence among parameters when calibrating process-oriented environmental models. A simple water quality model for the Elbe River (Germany) serves as an example, but the approach is applicable to a wide range of environmental models with time-series output. Model parameters are estimated by Bayesian inference via Markov chain Monte Carlo (MCMC) sampling. While the best-fit solution matches usual least-squares model calibration (with a penalty term for excessive parameter values), the Bayesian approach has the advantage of yielding a joint probability distribution for parameters. This posterior distribution encompasses all parameter combinations that produce a simulation output that fits observed data within measurement and modeling uncertainty. Bayesian inference further permits the introduction of prior knowledge, e.g., positivity of certain parameters. The estimated distribution shows the extent to which model parameters are controlled by observations through the process of inference, highlighting issues that cannot be settled unless more information becomes available. An interactive interface makes it possible to track how the ranges of parameter values that are consistent with observations change during a step-by-step assignment of fixed parameter values. Based on an initial analysis of the posterior via an undirected Gaussian graphical model, a directed Bayesian network (BN) is constructed. The BN transparently conveys information on the interdependence of parameters after calibration. Finally, a strategy to reduce the number of expensive model runs in MCMC sampling is introduced, based on a newly developed variant of delayed acceptance sampling with a Gaussian process surrogate and linear dimensionality reduction to support function-valued outputs.
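Random-walk Metropolis, the basic MCMC sampler behind calibrations like this, can be sketched on a toy one-parameter decay model (the model, data and tuning constants below are illustrative assumptions, not the Elbe water quality model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "process model": first-order decay, y(t) = y0 * exp(-k t)
t = np.linspace(0.0, 5.0, 30)
k_true, y0, sigma = 0.8, 10.0, 0.3
obs = y0 * np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    """Gaussian log-likelihood plus a flat positivity prior on k."""
    if k <= 0.0:
        return -np.inf
    resid = obs - y0 * np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: propose, then accept with prob min(1, ratio)
k, chain = 0.5, []
lp = log_post(k)
for _ in range(5000):
    prop = k + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = prop, lp_prop
    chain.append(k)
post = np.array(chain[1000:])  # drop burn-in
print(post.mean(), post.std())
```

The retained chain is a sample from the joint posterior; its spread is exactly the "range of parameter values consistent with observations" that the abstract refers to.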

4.
5.
We propose an adaptive, two-step strategy for the estimation of mixed qubit states. We show that the strategy is optimal in a local minimax sense for the trace norm distance as well as other locally quadratic figures of merit. Local minimax optimality means that, given n identical qubits, there exists no estimator which can perform better than the proposed estimator on a neighborhood of size n^(-1/2) of an arbitrary state. In particular, it is asymptotically Bayesian optimal for a large class of prior distributions. We present a physical implementation of the optimal estimation strategy based on continuous-time measurements in a field that couples with the qubits. The crucial ingredient of the result is the concept of local asymptotic normality (LAN) for qubits. This means that, for large n, the statistical model described by n identically prepared qubits is locally equivalent to a model with only a classical Gaussian distribution and a Gaussian state of a quantum harmonic oscillator. The term 'local' refers to a shrinking neighborhood around a fixed state ρ_0. An essential result is that the neighborhood radius can be chosen arbitrarily close to n^(-1/4). This allows us to use a two-step procedure by which we first localize the state within a smaller neighborhood of radius n^(-1/2+ε), and then use LAN to perform optimal estimation.

6.
From a unified viewpoint on quantum channel estimation, we compare the Cramér–Rao and minimax approaches; the latter gives the Bayesian bound in the group covariant model. For this purpose, we introduce the local asymptotic minimax bound, whose maximum is shown to be equal to the asymptotic limit of the minimax bound. It is shown that the local asymptotic minimax bound is strictly larger than the Cramér–Rao bound in the phase estimation case, while both bounds coincide when the minimum mean square error decreases with the order O(1/n). We also derive a sufficient condition for the minimum mean square error to decrease with the order O(1/n).

7.
Entropy measures the uncertainty associated with a random variable. It has important applications in cybernetics, probability theory, astrophysics, the life sciences and other fields. Recently, many authors have focused on the estimation of entropy for different life distributions. However, the estimation of entropy for the generalized Bilal (GB) distribution has not yet been considered. In this paper, we consider the estimation of the entropy and the parameters of the GB distribution based on adaptive Type-II progressive hybrid censored data. Maximum likelihood estimates of the entropy and the parameters are obtained using the Newton–Raphson iteration method. Bayesian estimates under different loss functions are provided with the help of Lindley's approximation. The approximate confidence interval and the Bayesian credible interval of the parameters and entropy are obtained using the delta and Markov chain Monte Carlo (MCMC) methods, respectively. Monte Carlo simulation studies are carried out to assess the performance of the different point and interval estimates. Finally, a real data set is analyzed for illustrative purposes.
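The Newton–Raphson iteration used for the MLEs can be illustrated on a simpler one-parameter likelihood; the exponential rate below is a stand-in for the GB parameters (its MLE has the closed form n/Σx, which makes the iteration easy to check):

```python
import numpy as np

def newton_mle_rate(x, lam0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson MLE for an exponential rate (stand-in likelihood).

    log L(lam) = n log(lam) - lam * sum(x)
    """
    x = np.asarray(x, dtype=float)
    n, s = x.size, x.sum()
    lam = lam0
    for _ in range(max_iter):
        score = n / lam - s        # first derivative of log L
        hess = -n / lam**2         # second derivative of log L
        step = score / hess        # Newton step: lam <- lam - score/hess
        lam -= step
        if abs(step) < tol:
            break
    return lam

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=200)  # true rate = 2
lam_hat = newton_mle_rate(x)
print(lam_hat)
```

For the GB distribution the score and Hessian are just messier expressions of the same shape, and the update rule is identical.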

8.
Recently, a new class of approximating coarse-grained stochastic processes and associated Monte Carlo algorithms was derived directly from microscopic stochastic lattice models for the adsorption/desorption and diffusion of interacting particles (12,13,15). The resulting hierarchy of stochastic processes is ordered by the level of coarsening in the space/time dimensions and describes mesoscopic scales while retaining a significant amount of microscopic detail on intermolecular forces and particle fluctuations. Here we rigorously compute, in terms of specific relative entropy, the information loss between non-equilibrium exact and approximating coarse-grained adsorption/desorption lattice dynamics. Our result is an error estimate analogous to rigorous error estimates for finite element/finite difference approximations of partial differential equations. We prove this error to be small as long as the level of coarsening is small compared to the range of interaction of the microscopic model. This result gives a first mathematical justification of the parameter regimes for which approximating coarse-grained Monte Carlo algorithms are expected to give errors within a given tolerance. MSC (2000) subject classifications: 82C80; 60J22; 94A17

9.
Lévy processes have been widely used to model a large variety of stochastic processes under anomalous diffusion. In this note we show that Lévy processes play an important role in the study of the generalized Langevin equation (GLE). A solution to the GLE is proposed using stochastic integration in the sense of convergence in probability. Properties of the solution processes are obtained, and numerical methods for stochastic integration are developed and applied to examples. Time series methods are applied to obtain estimation formulas for parameters related to the solution process. A Monte Carlo simulation study illustrates the estimation of the memory function parameter. We also estimate the stability index parameter when the noise is a Lévy process.
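Symmetric α-stable (Lévy) noise of the kind that can drive a GLE is commonly simulated with the Chambers–Mallows–Stuck transform; this is a generic sampler, not the note's specific numerical scheme:

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise.

    Valid for 0 < alpha <= 2, alpha != 1; alpha = 2 recovers N(0, 2).
    """
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(6)
gauss = sym_stable(2.0, 200_000, rng)  # Gaussian limit: variance 2
heavy = sym_stable(1.5, 200_000, rng)  # stability index 1.5: heavy tails
print(gauss.std(), np.abs(heavy).max())
```

For α = 2 the transform reduces to 2·sin(U)·√W, a Gaussian draw, while for α &lt; 2 the samples have infinite variance, which is what estimation of the stability index must contend with.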

10.
The statistical inference of the reliability and parameters of the stress–strength model has received great attention in the field of reliability analysis. Under the generalized progressive hybrid censoring (GPHC) scheme, it is important to discuss the point and interval estimates of the reliability of the multicomponent stress–strength (MSS) model, in which the stress and strength variables are derived from different distributions: stress is assumed to follow the Chen distribution and strength the Gompertz distribution. In the present study, the Newton–Raphson method was adopted to derive the maximum likelihood estimates (MLE) of the model parameters, and the corresponding asymptotic distribution was used to construct the asymptotic confidence interval (ACI). Subsequently, the exact confidence interval (ECI) of the parameters was calculated. A hybrid Markov chain Monte Carlo (MCMC) method was adopted to determine the approximate Bayesian estimates (BE) of the unknown parameters and the highest posterior density credible interval (HPDCI). A simulation study with an actual dataset was conducted for the BEs under the squared error loss function (SELF) and the MLEs of the model parameters and reliability, comparing the bias and mean squared errors (MSE). In addition, the three interval estimates were compared in terms of the average interval length (AIL) and coverage probability (CP).

11.
In this paper, we study the statistical inference of the generalized inverted exponential distribution with a common scale parameter and various shape parameters based on joint progressively Type-II censored data. The expectation maximization (EM) algorithm is applied to calculate the maximum likelihood estimates (MLEs) of the parameters. We obtain the observed information matrix based on the missing value principle. Interval estimates are computed by the bootstrap method. We provide Bayesian inference for both the informative prior and the non-informative prior. The importance sampling technique is performed to derive the Bayesian estimates and credible intervals under the squared error loss function and the LINEX loss function, respectively. Finally, we conduct a Monte Carlo simulation and a real data analysis. Moreover, we consider parameters that have order restrictions and provide the corresponding maximum likelihood estimates and Bayesian inference.

12.
13.
The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology.

This review is an introduction to Bayesian methods in cosmology and astrophysics and recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Monte Carlo Markov Chain methods. I then review the theory and application of Bayesian model comparison, discussing the notions of Bayesian evidence and effective model complexity, and how to compute and interpret those quantities. Recent developments in cosmological parameter extraction and Bayesian cosmological model building are summarised, highlighting the challenges that lie ahead.
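Bayesian model comparison via the evidence can be made concrete with a textbook coin-flip example (the counts below are invented for illustration): the evidence for a model with a free parameter integrates the likelihood over its prior, and the Bayes factor compares it against a fixed-parameter model:

```python
from math import lgamma, log

# Toy data: k heads in n tosses
n, k = 100, 62

# M0: fair coin, p = 0.5  ->  log evidence = n * log(0.5)
log_ev0 = n * log(0.5)

# M1: p unknown with a uniform prior  ->  evidence is the Beta integral
#   integral_0^1 p^k (1-p)^(n-k) dp = B(k+1, n-k+1)
log_ev1 = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

# Positive log Bayes factor favours the free-parameter model;
# the integral automatically penalizes its extra complexity (Occam factor).
log_bayes_factor = log_ev1 - log_ev0
print(log_bayes_factor)
```

With 62 heads the data only mildly favour the free-parameter model: the Occam penalty built into the evidence integral offsets much of the better fit.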

14.
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

15.
We analyse the simulation of strongly degenerate electrons at finite temperature using the recently introduced permutation blocking path integral Monte Carlo (PB‐PIMC) method [T. Dornheim et al., New J. Phys. 17, 073017 (2015)]. As a representative example, we consider electrons in a harmonic confinement and carry out simulations for up to P = 2000 so‐called imaginary‐time propagators – an important convergence parameter within the PIMC formalism. This allows us to study the P‐dependence of different observables of the configuration space in the Monte Carlo simulations and of the fermion sign problem. We find a surprisingly persisting effect of the permutation blocking for large P, which is explained by comparing different length scales. Finally, we touch upon the uniform electron gas in the warm dense matter regime.

16.
The nonlinear relaxation exponent ΔMnl for the order parameter is obtained through a standard Monte Carlo study of the two-dimensional Ising model. We find that ΔMnl is about 2.1 as predicted by renormalization group results but larger than some earlier Monte Carlo estimates. For the linear relaxation exponent ΔM1 our data suggest a slightly higher value as expected by scaling laws.
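The standard single-spin-flip Metropolis algorithm for the two-dimensional Ising model, as used in studies like this one, can be sketched as follows (lattice size, temperature and sweep count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
L, T, sweeps = 16, 1.0, 200          # lattice size, temperature, MC sweeps
beta = 1.0 / T
spins = np.ones((L, L), dtype=int)   # ordered low-temperature start

def metropolis_sweep(spins, beta):
    """One sweep: L*L single-spin-flip attempts with periodic boundaries."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb    # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(sweeps):
    metropolis_sweep(spins, beta)
m = abs(spins.mean())                # order parameter: magnetization per spin
print(m)
```

Relaxation exponents like those in the abstract are extracted from how the magnetization decays over such sweeps near the critical temperature; at the low temperature chosen here the lattice simply stays ordered.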

17.
The phase transition in a three-dimensional array of classical anharmonic oscillators with harmonic nearest-neighbor coupling (the discrete φ^4 model) is studied by Monte Carlo (MC) simulations and by analytical methods. The model allows us to choose a single dimensionless parameter a that completely determines the behavior of the system. Changing a from 0 to +∞ allows one to go continuously from the displacive to the order-disorder limit. We calculate the transition temperature T_c and the temperature dependence of the order parameter down to T = 0 for a wide range of the parameter a. The T_c from MC calculations shows excellent agreement with the known asymptotic values for small and large a. The obtained MC results are further compared with predictions of the mean-field and independent-mode approximations, as well as with predictions of our own approximation scheme. In this approximation, we introduce an auxiliary system which yields approximately the same temperature behavior of the order parameter but allows the decoupling of the phonon modes. Our approximation gives the value of T_c within an error of 5% and satisfactorily describes the temperature dependence of the order parameter for all values of a.

18.
We investigate the importance of local anharmonic vibrations of the bridging oxygen in the copper oxide high-T_c materials in the context of superconductivity. For the numerical simulation we employ the projector quantum Monte Carlo method to study the ground-state properties of the coupled electron-phonon system. The quantum Monte Carlo simulation allows an accurate treatment of electronic interactions, which lets us investigate the influence of strong correlations on superconductivity mediated by additional quantum degrees of freedom. As a generic model for such a system, we study the two-dimensional single-band Hubbard model coupled to local pseudo-spins (the bridging oxygen), which mediate an effective attractive electron-electron interaction leading to superconductivity. The results are compared to those of an effective negative-U model.

19.
In integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models, parameter estimation is conventionally based on the conditional maximum likelihood estimator (CMLE). However, because the CMLE is sensitive to outliers, we consider a robust estimation method for bivariate Poisson INGARCH models using the minimum density power divergence estimator. We demonstrate that the proposed estimator is consistent and asymptotically normal under certain regularity conditions. Monte Carlo simulations are conducted to evaluate the performance of the estimator in the presence of outliers. Finally, a real data analysis using monthly count series of crimes in New South Wales and an artificial data example are provided as illustrations.

20.
Modern computational models in supervised machine learning are often highly parameterized universal approximators. As such, the values of the parameters are unimportant, and only the out-of-sample performance is considered. On the other hand, much of the literature on model estimation assumes that the parameters themselves have intrinsic value, and thus is concerned with the bias and variance of parameter estimates, which may not have any simple relationship to out-of-sample model performance. Therefore, within supervised machine learning, heavy use is made of ridge regression (i.e., L2 regularization), which requires the estimation of hyperparameters and can be rendered ineffective by certain model parameterizations. We introduce an objective function, which we refer to as Information-Corrected Estimation (ICE), that reduces KL-divergence-based generalization error for supervised machine learning. ICE attempts to directly maximize a corrected likelihood function as an estimator of the KL divergence. Such an approach is proven, theoretically, to be effective for a wide class of models, with only mild regularity restrictions. Under finite sample sizes, this corrected estimation procedure is shown experimentally to lead to significant reductions in generalization error compared to maximum likelihood estimation and L2 regularization.
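Ridge regression (L2 regularization), which the abstract names as the usual tool, has a closed-form solution that can be sketched directly (the data below are synthetic and the dimensions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + rng.normal(0.0, 0.5, size=n)

def ridge(X, y, lam):
    """Closed-form L2-regularized least squares: (X'X + lam I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

w_ols = ridge(X, y, 0.0)    # lam = 0 recovers ordinary least squares
w_reg = ridge(X, y, 10.0)   # lam > 0 shrinks the coefficients toward zero
print(np.linalg.norm(w_reg), np.linalg.norm(w_ols))
```

The hyperparameter lam must be tuned (typically by cross-validation), which is precisely the burden the abstract contrasts with a likelihood-level correction such as ICE.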


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号