Similar Literature
20 similar articles found.
1.
Operational statistics is an operational theory of probability and statistics which generalizes classical probability and statistics and provides a formalism particularly suited to the needs of quantum mechanics. Within this formalism, statistical inference can be accomplished using the Bayesian inference strategy. In a hierarchical Bayesian approach, a second-order probability measure, or credibility, represents degrees of belief in statistical hypotheses. A credibility determines an assignment of simple and conditioned betting rates to events in a natural way. In the setting of operational statistics, we show that a credibility is completely determined by the assignment of betting rates it induces. This result suggests a certain unity between the Bayesian philosophy that regards betting rates as central and the one that advocates the hierarchical approach.
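To make the induced betting rates concrete, here is a minimal numerical sketch (not from the paper; the three hypotheses and the credibility weights are invented): a credibility over statistical hypotheses induces a simple betting rate on each event by averaging, and conditioned rates follow from ratios of the induced rates.

```python
import numpy as np

# Toy credibility: degrees of belief over three statistical hypotheses,
# each hypothesis being a probability assignment over four outcomes.
credibility = np.array([0.5, 0.3, 0.2])      # second-order weights (invented)
hypotheses = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.20, 0.30, 0.40],
])                                           # rows: P(outcome | hypothesis)

def betting_rate(event):
    """Simple betting rate on an event (a list of outcome indices):
    the credibility-weighted average of the hypotheses' probabilities."""
    return credibility @ hypotheses[:, event].sum(axis=1)

def conditioned_rate(event, given):
    """Conditioned betting rate on `event` given `given`, recovered
    from the unconditional rates the credibility induces."""
    both = sorted(set(event) & set(given))
    return betting_rate(both) / betting_rate(given)

print(betting_rate([0, 1]))                  # rate on the event {0, 1}
print(conditioned_rate([0], [0, 1]))         # rate on {0} given {0, 1}
```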

2.
A reliability assessment method using multiple information sources
The key to small-sample reliability assessment is to make full use of the subjective information obtained from scientific analysis and expert experience, together with basic test information on the input parameters. Subjective inference generally yields only incomplete prior information on reliability, usually in the form of a mean reliability or a credible interval. For success/failure and normal-type tests, the maximum entropy principle is used to convert this incomplete information into the corresponding conjugate prior, while test information on the inputs is converted into prior information on the output through the performance function and statistical theory. The prior information and the test information are then fused via Bayesian theory. The computational method is described, and worked examples analyze the influence of non-test information on the posterior distribution of reliability, on the required number of tests, and on the reliability assessment results.
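A hedged sketch of the success/failure case described above, with invented numbers: an expert-supplied mean reliability is turned into a maximum-entropy prior on [0, 1], which is then fused with test data via Bayes' theorem. The paper works with conjugate priors; the grid evaluation below is simply the most transparent way to show the fusion.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_lambda(mean_r):
    """Max-entropy density on [0, 1] with E[R] = mean_r is p(r) ~ exp(lam*r);
    solve for lam (lam -> 0 recovers the uniform prior with mean 1/2)."""
    def mean_of(lam):
        return 0.5 if abs(lam) < 1e-9 else 1/(1 - np.exp(-lam)) - 1/lam
    return brentq(lambda lam: mean_of(lam) - mean_r, -50, 50)

def fuse(mean_r, successes, failures, n_grid=4001):
    """Bayes fusion of the max-entropy prior with success/failure test data."""
    r = np.linspace(1e-6, 1 - 1e-6, n_grid)
    log_post = (maxent_lambda(mean_r) * r
                + successes * np.log(r) + failures * np.log(1 - r))
    w = np.exp(log_post - log_post.max())
    return r, w / w.sum()

r, w = fuse(mean_r=0.9, successes=18, failures=1)   # invented expert mean and data
print("posterior mean reliability:", (r * w).sum())
```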

3.
4.
This paper is a review of a particular approach to the method of maximum entropy as a general framework for inference. The discussion emphasizes pragmatic elements in the derivation. An epistemic notion of information is defined in terms of its relation to the Bayesian beliefs of ideally rational agents. The method of updating from a prior to posterior probability distribution is designed through an eliminative induction process. The logarithmic relative entropy is singled out as a unique tool for updating (a) that is of universal applicability, (b) that recognizes the value of prior information, and (c) that recognizes the privileged role played by the notion of independence in science. The resulting framework—the ME method—can handle arbitrary priors and arbitrary constraints. It includes the MaxEnt and Bayes’ rules as special cases and, therefore, unifies entropic and Bayesian methods into a single general inference scheme. The ME method goes beyond the mere selection of a single posterior, and also addresses the question of how much less probable other distributions might be, which provides a direct bridge to the theories of fluctuations and large deviations.
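A minimal sketch of the core updating step (my illustration, not the review's notation): on a discrete grid, the distribution that minimizes relative entropy to a prior q subject to an expectation constraint is an exponential tilt of q, with the multiplier fixed by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

def me_update(q, f, F):
    """Minimize relative entropy to the prior q subject to E_p[f(X)] = F.
    The solution is the exponential tilt p(x) ~ q(x) exp(lam * f(x))."""
    q = q / q.sum()
    def tilted(lam):
        w = q * np.exp(lam * (f - f @ q))   # f shifted only for numerical stability
        return w / w.sum()
    lam = brentq(lambda l: tilted(l) @ f - F, -20, 20)
    return tilted(lam)

x = np.linspace(-5, 5, 1001)
q = np.exp(-0.5 * x**2)                      # discretized standard normal prior
p = me_update(q, x, F=1.0)                   # impose the constraint E[X] = 1
print("updated mean:", p @ x)                # ~1.0: a tilted (shifted) Gaussian
```

With a data constraint in place of a moment constraint, the same machinery reproduces Bayes' rule, which is the unification the abstract refers to.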

5.
We present a case study for Bayesian analysis and proper representation of distributions and dependence among parameters when calibrating process-oriented environmental models. A simple water quality model for the Elbe River (Germany) serves as an example, but the approach is applicable to a wide range of environmental models with time-series output. Model parameters are estimated by Bayesian inference via Markov Chain Monte Carlo (MCMC) sampling. While the best-fit solution matches the usual least-squares model calibration (with a penalty term for excessive parameter values), the Bayesian approach has the advantage of yielding a joint probability distribution for the parameters. This posterior distribution encompasses all parameter combinations that produce simulation output fitting the observed data within measurement and modeling uncertainty. Bayesian inference further permits the introduction of prior knowledge, e.g., positivity of certain parameters. The estimated distribution shows to what extent model parameters are controlled by observations through the process of inference, highlighting issues that cannot be settled unless more information becomes available. An interactive interface makes it possible to track how the ranges of parameter values consistent with observations change as fixed values are assigned to parameters step by step. Based on an initial analysis of the posterior via an undirected Gaussian graphical model, a directed Bayesian network (BN) is constructed. The BN transparently conveys information on the interdependence of parameters after calibration. Finally, a strategy to reduce the number of expensive model runs in MCMC sampling is introduced, based on a newly developed variant of delayed acceptance sampling with a Gaussian process surrogate and linear dimensionality reduction to support function-valued outputs.
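The calibration loop itself is standard; here is a hedged, self-contained miniature (a made-up two-parameter decay model rather than the Elbe model, with positivity enforced by sampling in log-space):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y_obs = 5.0 * np.exp(-0.3 * t) + rng.normal(0, 0.2, t.size)   # synthetic data

def model(theta):                  # theta = (log c0, log k): positivity built in
    c0, k = np.exp(theta)
    return c0 * np.exp(-k * t)

def log_post(theta, sigma=0.2):    # Gaussian likelihood + weak Gaussian prior
    resid = y_obs - model(theta)
    return -0.5 * (resid @ resid) / sigma**2 - 0.5 * (theta @ theta) / 10.0

theta, chain = np.zeros(2), []
lp = log_post(theta)
for _ in range(20000):             # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.exp(np.array(chain)[5000:])   # burn-in discarded, back-transformed
print("posterior means (c0, k):", chain.mean(axis=0))   # near (5, 0.3)
```

The retained chain is exactly the joint posterior sample from which graphical-model or BN summaries of parameter interdependence can be built.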

6.
We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
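A one-dimensional caricature of the map idea (the paper develops far more general parameterizations): optimize a monotone linear map that pushes a standard normal prior toward a non-Gaussian target by minimizing a Monte Carlo estimate of the KL divergence, then draw unlimited posterior samples by reapplying the map.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)             # fixed samples from the prior

def log_target(z):                        # unnormalized, non-Gaussian "posterior"
    return -0.5 * (z - 2.0)**2 - 0.1 * (z - 2.0)**4

def kl_objective(params):                 # KL(T# prior || posterior) up to a constant
    a, c = params
    z = a + np.exp(c) * x                 # monotone map T(x) = a + e^c * x
    return np.mean(-log_target(z)) - c    # -c is -log |T'(x)|

a, c = minimize(kl_objective, x0=[0.0, 0.0]).x
samples = a + np.exp(c) * x               # cheap independent posterior draws
print("map offset/scale:", a, np.exp(c), "posterior mean:", samples.mean())
```

Because the prior samples are fixed, the objective is deterministic and standard optimizers apply; each new posterior sample costs one map evaluation and no likelihood calls, which is the advantage the abstract emphasizes.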

7.
A tracking filter algorithm based on maneuvering detection delay is presented to resolve the ambiguity in target maneuver decisions introduced by the measurement errors of active sonar. When the maneuvering detection is ambiguous, two target motion hypotheses, uniform motion and maneuver, derived from the method of multiple hypothesis tracking, are generated to delay the final decision. The hypothesis test statistic is then constructed from the residual sequence. The active sonar's ability to track targets with unknown prior information is improved by a modified sequential probability ratio test that integrates the advantages of the strong tracking filter and the Kalman filter. Simulation results show that the algorithm not only tracks uniformly moving targets accurately but also tracks maneuvering targets stably. The effectiveness of the algorithm for real underwater acoustic targets is further verified by sea trial data.
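A hedged sketch of the residual-based detection ingredient (not the paper's exact algorithm; the maneuver model, thresholds, and inflation factor k2 are invented): a constant-velocity Kalman filter feeds its normalized innovations to a Wald-style sequential probability ratio test of "no maneuver" against "inflated innovation variance".

```python
import numpy as np

rng = np.random.default_rng(2)
dt, R, k2 = 1.0, 25.0, 4.0                       # k2: invented variance inflation
F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity model
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.array([[dt**4/4, dt**3/2], [dt**3/2, dt**2]])

zs, x_true = [], np.array([0.0, 3.0])            # truth maneuvers after step 60
for step in range(120):
    acc = 0.8 if step >= 60 else 0.0
    x_true = F @ x_true + np.array([0.5 * acc * dt**2, acc * dt])
    zs.append(x_true[0] + rng.normal(0, np.sqrt(R)))

x, P, llr = np.zeros(2), 100.0 * np.eye(2), 0.0
A, B = np.log(0.95/0.05), np.log(0.05/0.95)      # Wald-style thresholds
for step, z in enumerate(zs):
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = (H @ P @ H.T).item() + R                 # innovation variance
    nu = z - (H @ x).item()                      # innovation (residual)
    K = (P @ H.T / S).ravel()
    x, P = x + K * nu, (np.eye(2) - np.outer(K, H)) @ P   # update
    # log-likelihood ratio on the innovation: N(0, k2*S) vs N(0, S)
    llr += -0.5 * np.log(k2) + (nu**2 / (2 * S)) * (1 - 1/k2)
    if llr >= A:
        print("maneuver declared at step", step)
        break
    llr = max(llr, B)                            # clamp at the lower bound
else:
    print("no maneuver declared")
```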

8.
The problem of inverse statistics (statistics of the distances over which signal fluctuations exceed a certain threshold) in differentiable signals with power-law spectrum E(k) ~ k^(-α), 3 ≤ α < 5, is discussed. We show that for these signals, with random phases, exit-distance moments follow a bifractal distribution. We also investigate two-dimensional turbulent flows in the direct cascade regime, which display more complex behavior. We give numerical evidence that the inverse statistics of 2D turbulent flows is described by a multifractal probability distribution; i.e., the statistics of laminar events is not simply captured by the exponent α characterizing the spectrum.
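A numerical sketch of the setup (all parameters invented): synthesize a random-phase signal with spectrum E(k) ~ k^(-α) and measure exit distances, the first distance r at which |u(x+r) - u(x)| exceeds a threshold δ.

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha = 2**14, 3.5                              # within the 3 <= alpha < 5 range
k = np.arange(1, N // 2 + 1)
spec = np.zeros(N // 2 + 1, dtype=complex)
spec[1:] = k ** (-alpha / 2) * np.exp(1j * rng.uniform(0, 2*np.pi, k.size))
u = np.fft.irfft(spec, n=N)                        # random-phase signal
u /= u.std()

def exit_distances(u, delta):
    out = []
    for i in range(0, len(u) - 1, 16):             # subsampled start points
        d = np.abs(u[i + 1:] - u[i])
        j = np.argmax(d > delta)                   # first exceedance, if any
        if d[j] > delta:
            out.append(j + 1)
    return np.array(out)

for delta in (0.25, 0.5, 1.0):
    r = exit_distances(u, delta)
    print(f"delta={delta}: <r>={r.mean():.1f}, sqrt(<r^2>)={np.sqrt((r**2).mean()):.1f}")
```

Scanning the moments of r against δ over several thresholds is the raw material from which the bifractal (or, in 2D turbulence, multifractal) scaling is extracted.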

9.
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain-branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics, approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on the Arrhenius parameters of the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature for estimating the integrals needed in data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
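A toy version of the inference target, not the paper's ABC machinery: rejection sampling of Arrhenius parameters against summary statistics of the published kind, i.e. nominal rate coefficients with an uncertainty factor at a few temperatures. All numbers below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
Rgas = 8.314                                     # J/(mol K)
T_obs = np.array([1200.0, 1500.0, 1800.0])       # shock-tube temperatures (invented)
k_nom = 3.5e9 * np.exp(-70_000.0 / (Rgas * T_obs))   # invented nominal rates
UF = 1.3                                         # invented uncertainty factor

n = 200_000                                      # broad priors on (log A, Ea)
logA = rng.uniform(np.log(1e8), np.log(1e11), n)
Ea = rng.uniform(40_000.0, 100_000.0, n)
k = np.exp(logA[:, None] - Ea[:, None] / (Rgas * T_obs))  # k(T) = A exp(-Ea/RT)
ok = np.all(np.abs(np.log(k / k_nom)) <= np.log(UF), axis=1)   # ABC-style rejection
print(ok.sum(), "accepted; corr(log A, Ea) =",
      np.corrcoef(logA[ok], Ea[ok])[0, 1])
```

The accepted set directly exhibits the strong correlation between log A and Ea that makes joint posterior densities, rather than independent parameter bounds, the right object to report.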

10.
Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework for selecting priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with a constrained mean value of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to keep only a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be the one at the right temperature.

11.
Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).

12.
The performance of Bayesian inference in predicting the sequence of DNA molecules from fixed-force unzipping experiments is investigated. We show that the probability of misprediction decreases exponentially with the amount of collected data. The decay rate is calculated as a function of the biochemical parameters (binding free energies), the sequence content, the applied force, the elastic properties of a DNA single strand, and the time resolution.

13.
Inversion of ocean ducts from radar sea clutter using the Bayesian-MCMC method
Sheng Zheng, Huang Sixun, Zeng Guodong. Acta Physica Sinica, 2009, 58(6): 4335-4341
Using the Bayesian-Monte Carlo (Bayesian-MCMC) method, prior information on ocean duct parameters is expressed as a prior probability density and combined with radar sea clutter data (electromagnetic propagation loss) to obtain the posterior probability density of the duct parameters to be retrieved. The posterior density is sampled with a Markov chain Monte Carlo (MCMC) Gibbs sampler, and the maximum-likelihood value of the samples is taken as the estimate of the duct parameters. Numerical experiments show that the method makes effective use of the prior information and achieves higher retrieval accuracy than a genetic algorithm. Because the prior information is exploited more fully, the method yields the probability distribution of the solution, i.e., an uncertainty analysis of the solution, which is of practical reference value. Keywords: duct; electromagnetic wave propagation loss; Bayesian-Monte Carlo; probability distribution

14.
With the increasing number of connected devices, complex systems such as smart homes record a multitude of events of various types, magnitudes and characteristics. Current systems struggle to identify which events can be considered more memorable than others. In contrast, humans are able to quickly categorize some events as being more “memorable” than others. They do so without relying on knowledge of the system's inner workings or large previous datasets. Having this ability would allow the system to: (i) identify and summarize a situation to the user by presenting only memorable events; (ii) suggest the most memorable events as possible hypotheses in an abductive inference process. Our proposal is to use Algorithmic Information Theory to define a “memorability” score by retrieving events using predicative filters. We use smart-home examples to illustrate how our theoretical approach can be implemented in practice.
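A minimal sketch of the idea under one loud assumption: compressed length (zlib) stands in for the uncomputable Kolmogorov complexity, and an event's memorability is scored by how little it compresses given the log of past events. The event strings are invented.

```python
import zlib

def complexity(data: bytes) -> int:
    """Compressed length as a computable proxy for algorithmic complexity."""
    return len(zlib.compress(data, 9))

def memorability(event: str, context: str) -> int:
    """Conditional-complexity proxy: C(context + event) - C(context).
    Routine events add little; surprising events add more."""
    return complexity((context + event).encode()) - complexity(context.encode())

log = "door:open;light:on;" * 50                           # routine smart-home events
print(memorability("door:open;", log))                     # low: fits the pattern
print(memorability("smoke:detected;window:broken;", log))  # higher: surprising
```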

15.
The data analysis carried out by the LIGO–Virgo collaboration on gravitational-wave events uses nested sampling to compute Bayesian evidences and posterior distributions for inferring the source properties of compact binaries. With poor sampling from the constrained prior, nested sampling algorithms may misbehave and fail to sample the posterior distribution faithfully. Fowlie et al. (2020) outline a method of validating the performance of nested sampling, or identifying pathologies such as plateaus in the parameter space, using likelihood insertion order statistics. Here, this method is applied to nested sampling analyses of all events in the first and second gravitational wave transient catalogs (GWTC-1 and GWTC-2) of the LIGO–Virgo collaboration. The insertion order statistics are tested for uniformity across 45 events in the catalogs, and it is found that, with a few exceptions that have a negligible effect on the final posteriors, the data from the analysis of events in the catalogs are consistent with unbiased prior sampling. There is, however, weak evidence against uniformity at the catalog-level meta-test, yielding a Kolmogorov–Smirnov meta-p-value of .
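A toy reproduction of the diagnostic (not the LIGO–Virgo pipeline): nested sampling on a one-dimensional likelihood with honest rejection sampling from the constrained prior, recording each replacement point's insertion index among the live-point likelihoods and testing the indices for uniformity.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(5)
loglike = lambda x: -0.5 * ((x - 0.5) / 0.05)**2   # toy likelihood, uniform prior
n_live, n_iter = 100, 800
live = rng.uniform(0, 1, n_live)
logL = loglike(live)
ranks = []
for _ in range(n_iter):
    worst = np.argmin(logL)
    Lmin = logL[worst]
    while True:                                    # draw from the constrained prior
        x = rng.uniform(0, 1)
        if loglike(x) > Lmin:
            break
    live[worst], logL[worst] = x, loglike(x)
    ranks.append(int(np.sum(logL < logL[worst])))  # insertion index of the new point
# Under faithful constrained-prior sampling the indices are uniform on
# {0, ..., n_live - 1}; jitter onto [0, 1) and KS-test against uniform.
u = (np.array(ranks) + rng.uniform(0, 1, n_iter)) / n_live
print(kstest(u, "uniform"))
```

A sampler that fails to explore the constrained prior (or a likelihood with a plateau) biases the insertion indices away from uniformity, which is exactly what the catalog-level meta-test aggregates.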

16.
The BTW Abelian sandpile model is a prominent example of systems showing self-organised criticality (SOC) in the infinite size limit. We study finite-size effects with special focus on the statistics of extreme events, i.e., of particularly large avalanches. Not only the avalanche size probability distribution but also the mutual independence of large avalanches in the critical state is affected by finite-size effects. Instead of a Poissonian recurrence-time distribution, in the finite system we find a repulsion of extreme events that depends on the avalanche size and not on the respective probability. The dependence of these effects on the system size is investigated and some data collapse is found. Our results imply that SOC is an unsuitable mechanism for explaining extreme events that occur in clusters.
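For readers who want to reproduce the basic statistics, a minimal BTW sandpile with open boundaries (the grid size and number of drops are illustrative; the finite-size analysis in the paper goes well beyond this, and the first drops belong to a transient before the critical state is reached):

```python
import numpy as np

rng = np.random.default_rng(6)
L = 32
grid = rng.integers(0, 4, (L, L))                  # random initial condition

def relax(grid):
    """Topple all sites holding >= 4 grains; return the avalanche size
    (total number of topplings). Grains leave the system at the edges."""
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    grid[ni, nj] += 1

sizes = []
for _ in range(20000):                             # drive: one grain at a time
    i, j = rng.integers(0, L, 2)
    grid[i, j] += 1
    sizes.append(relax(grid))
big = np.array([s for s in sizes if s > 0])
print("mean avalanche size:", big.mean(), "max:", big.max())
```

Recording the waiting times between avalanches above a size threshold, rather than just the sizes, gives the recurrence-time statistics whose departure from Poissonian behavior the paper analyzes.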

17.
A change point is a location or time at which observations or data obey two different models: one before and one after. In real problems, we may have prior information about the location of the change point, for example that it lies in the right or left tail of the sequence. How does one incorporate this prior information into the current cumulative sum (CUSUM) statistics? We propose a new class of weighted CUSUM statistics with three different types of quadratic weights accounting for different prior positions of the change points. One interpretation of the weights is as the mean duration in a random walk. Under the normal model with known variance, the exact distributions of these statistics are expressed explicitly in terms of eigenvalues. Theoretical results on the explicit differences between these distributions are valuable. The expansions of the asymptotic distributions are compared with the expansions of the limit distributions of the Cramér–von Mises statistic and the Anderson–Darling statistic. We provide extensions from independent normal responses to more interesting models, such as graphical models, mixtures of normals, Poisson models, and weakly dependent models. Simulations suggest that the proposed test statistics have better power than graph-based statistics. We illustrate their application to a detection problem with video data.
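A hedged sketch of the unweighted ingredient, with an invented weight to show where prior location information enters (the paper's quadratic weights and exact distributions are not reproduced here):

```python
import numpy as np

def cusum_stats(x, sigma=1.0):
    """Standardized CUSUM for a mean shift in N(mu, sigma^2) data:
    |S_k - (k/n) S_n| / (sigma * sqrt(k (n - k) / n)) for k = 1..n-1."""
    n, S = len(x), np.cumsum(x)
    k = np.arange(1, n)
    return k, np.abs(S[:-1] - k / n * S[-1]) / (sigma * np.sqrt(k * (n - k) / n))

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 80), rng.normal(1, 1, 20)])   # late change
k, c = cusum_stats(x)
w = (k / len(x))**2          # invented right-tail weight, not the paper's weights
print("unweighted argmax:", k[np.argmax(c)],
      "| weighted argmax:", k[np.argmax(w * c)])
```

Up-weighting candidate change points near the suspected tail concentrates the test's power there, which is the effect the quadratic weights formalize.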

18.
A sequential test method based on the Bayesian posterior probability is proposed: a decision criterion for the test is established, and a method for computing the critical values of the criterion is given. For a given truncation number of experiments, a truncation scheme is proposed and a truncation decision rule is established. The method is applied to the design of microwave-effect experiments that verify whether the failure probability of a test object under microwave irradiation reaches a given level, and the corresponding experimental scheme and an estimate of the required sample size are given. Finally, a worked example illustrates the application of the method, which is compared with existing methods.
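A hedged sketch of the flavor of such a test (the bounds and truncation rule here are illustrative, not the paper's derived critical values): track the posterior probability that the failure probability p reaches a level p0, stop as soon as it crosses an acceptance or rejection bound, or truncate after the planned number of trials.

```python
from scipy.stats import beta

def sequential_test(outcomes, p0, c_accept=0.95, c_reject=0.95, a=1.0, b=1.0):
    """Beta(a, b) prior on the failure probability p; x = 1 means a failure.
    Stop when the posterior probability P(p >= p0 | data) crosses a bound."""
    fails = trials = 0
    for x in outcomes:
        trials += 1
        fails += x
        post_ge = beta.sf(p0, a + fails, b + trials - fails)  # P(p >= p0 | data)
        if post_ge >= c_accept:
            return "effect verified", trials
        if 1.0 - post_ge >= c_reject:
            return "effect rejected", trials
    return "truncated: no decision", trials      # planned trials exhausted

print(sequential_test([1, 1, 0, 1, 1, 1, 1, 1], p0=0.5))
```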

20.
Several recent works point out that the crowd of small unobservable earthquakes (with magnitudes below the detection threshold md) may play a significant and perhaps dominant role in triggering future seismicity. Using the ETAS branching model of triggered seismicity, we apply the formalism of generating probability functions to investigate how the statistical properties of observable earthquakes differ from the statistics of all events. The ETAS (epidemic-type aftershock sequence) model assumes that each earthquake can trigger other earthquakes (“aftershocks”). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. The triggering efficiency of earthquakes is assumed to vanish below a lower magnitude limit m0, in order to ensure the convergence of the theory, and may reflect the physics of state-and-velocity frictional rupture. We show that, to a good approximation, the statistical distribution of seismic rates of events with magnitudes above md generated by an ETAS model with branching ratio n is the same as that of events generated by another ETAS model with an effective parameter n(md). Our present analysis thus confirms, for the full statistical (time-independent or large time-window approximation) properties, the results obtained previously by one of us and Werner, based solely on the average seismic rates (the first-order moment of the statistics). Our analysis also demonstrates that this correspondence is not exact: there are small corrections, which can be systematically calculated in terms of additional contributions that can be mapped onto a different branching model. We also show that this approximate correspondence of the ETAS model onto itself, obtained by changing m0 into md and n into n(md), holds only with respect to its statistical properties and not for all its space-time properties.
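A cartoon of the effective-branching-ratio claim, with one simplification the paper does not make: offspring counts here are independent of magnitude, so detection reduces to thinning each event with probability p = P(m ≥ md), and the mean-matched effective ratio n_eff = 1 - (1 - n)/p is my illustration, not the paper's n(md).

```python
import numpy as np

rng = np.random.default_rng(8)

def cascade_size(n):
    """Total events in one branching cascade (root included), with
    Poisson(n) offspring per event; a sum of Poissons is again Poisson."""
    total, frontier = 0, 1
    while frontier:
        total += frontier
        frontier = rng.poisson(n * frontier)
        if total > 10**6:        # safety guard; n < 1 keeps cascades finite
            break
    return total

n, p = 0.9, 0.5                                   # branching ratio, P(m >= md)
n_eff = 1 - (1 - n) / p                           # matches the observed mean
full = np.array([cascade_size(n) for _ in range(20000)])
observed = rng.binomial(full, p)                  # thin each cascade to m >= md
effective = np.array([cascade_size(n_eff) for _ in range(20000)])
print("mean observed:", observed.mean(), "vs effective-model mean:", effective.mean())
print("std observed:", observed.std(), "vs effective-model std:", effective.std())
```

The means match by construction while higher moments do not match exactly, consistent with the abstract's remark that the correspondence onto an effective ETAS model is approximate, with systematically computable corrections.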
