Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A Bayesian tutorial for data assimilation   (Total citations: 1; self-citations: 0; citations by others: 1)
Data assimilation is the process by which observational data are fused with scientific information. The Bayesian paradigm provides a coherent probabilistic approach for combining information, and thus is an appropriate framework for data assimilation. Viewing data assimilation as a problem in Bayesian statistics is not new. However, the field of Bayesian statistics is rapidly evolving and new approaches for model construction and sampling have been utilized recently in a wide variety of disciplines to combine information. This article includes a brief introduction to Bayesian methods. Paying particular attention to data assimilation, we review linkages to optimal interpolation, kriging, Kalman filtering, smoothing, and variational analysis. Discussion is provided concerning Monte Carlo methods for implementing Bayesian analysis, including importance sampling, particle filtering, ensemble Kalman filtering, and Markov chain Monte Carlo sampling. Finally, hierarchical Bayesian modeling is reviewed. We indicate how this approach can be used to incorporate significant physically based prior information into statistical models, thereby accounting for uncertainty. The approach is illustrated in a simplified advection–diffusion model.
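As a hedged illustration of one of the Monte Carlo schemes reviewed above, the sketch below implements a stochastic (perturbed-observation) ensemble Kalman filter analysis step. It is not code from the article; the linear observation operator H, Gaussian error covariance R, and all names are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng=np.random.default_rng(0)):
    """Stochastic ensemble Kalman filter analysis step (illustrative).

    ensemble : (n_members, n_state) prior (forecast) states
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    HX = ensemble @ H.T
    Y = HX - HX.mean(axis=0)                        # observation-space anomalies
    Pxy = X.T @ Y / (n_members - 1)                 # sample cross-covariance
    Pyy = Y.T @ Y / (n_members - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    # Perturbed observations give the analysis ensemble the correct spread
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_members)
    return ensemble + (y_pert - HX) @ K.T
```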

2.
We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
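The following is a minimal sketch, in the spirit of the reformulation described above, of a particle-filter-style global minimizer of a noisy objective. The exponential pseudo-likelihood weighting, temperature, jitter, and search domain are assumptions, not the authors' algorithm.

```python
import numpy as np

def pf_minimize(noisy_f, dim, n_particles=200, n_iters=50,
                temperature=1.0, jitter=0.1, rng=np.random.default_rng(1)):
    """Particle-filter-style stochastic global minimization (illustrative)."""
    particles = rng.uniform(-5.0, 5.0, size=(n_particles, dim))   # initial cloud
    for _ in range(n_iters):
        values = np.array([noisy_f(p) for p in particles])        # noisy evaluations
        weights = np.exp(-(values - values.min()) / temperature)  # pseudo-likelihood
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx] + jitter * rng.standard_normal((n_particles, dim))
    values = np.array([noisy_f(p) for p in particles])
    return particles[values.argmin()]

# Usage: noisy quadratic with minimum near the origin
best = pf_minimize(lambda x: np.sum(x**2) + 0.1 * np.random.randn(), dim=2)
```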

3.
Spreading processes on networks are often analyzed to understand how the outcome of the process (e.g. the number of affected nodes) depends on structural properties of the underlying network. Most available results are ensemble averages over certain interesting graph classes such as random graphs or graphs with a particular degree distribution. In this paper, we focus instead on determining the expected spreading size and the probability of large spreadings for a single (but arbitrary) given network and study the computational complexity of these problems using reductions from well-known network reliability problems. We show that computing both quantities exactly is intractable, but that the expected spreading size can be efficiently approximated with Monte Carlo sampling. When nodes are weighted to reflect their importance, the problem becomes as hard as the s-t reliability problem, for which no efficient randomized approximation scheme is known to date. Finally, we give a formal complexity-theoretic argument for why there is most likely no randomized constant-factor approximation for the probability of large spreadings, even for the unweighted case. A hybrid Monte Carlo sampling algorithm is proposed that resorts to specialized s-t reliability algorithms for accurately estimating the infection probability of those nodes that are rarely affected by the spreading process.
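A minimal sketch of a Monte Carlo estimator of the expected spreading size, assuming an independent-cascade spreading model with a uniform transmission probability p (the paper's exact spreading model and node weighting are not reproduced):

```python
import random
from collections import deque

def expected_spread(adj, p, seed, n_samples=10_000, rng=random.Random(0)):
    """Monte Carlo estimate of the expected spreading size on a given network.

    adj  : dict mapping node -> list of neighbours
    p    : per-edge transmission probability
    seed : initially affected node
    """
    total = 0
    for _ in range(n_samples):
        affected = {seed}
        frontier = deque([seed])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                if v not in affected and rng.random() < p:
                    affected.add(v)
                    frontier.append(v)
        total += len(affected)
    return total / n_samples
```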

4.
J. K. Brennan 《Molecular physics》2013,111(19):2647-2654
A methodology is presented for efficiently sampling configurations of reacting mixtures in the reaction ensemble Monte Carlo simulation technique. A cavity-biasing scheme is used, which samples configurations more effectively than conventional random sampling. Akin to other biasing schemes implemented in insertion-based Monte Carlo methods such as Gibbs ensemble Monte Carlo, the method presented here searches for space in the reacting mixture where the insertion of a product molecule is energetically favoured. This sampling bias is then corrected in the acceptance criteria. The approach allows for the study of reacting mixtures at high density as well as of polyatomic molecular species. For some cases, the method is shown to increase the efficiency of the reaction steps by a factor greater than 20. The approach is shown to be readily generalized to other biasing schemes such as orientational biasing of polar molecules and configurational biasing of chain-like molecules.

5.
Phase equilibria of fluids with variable size polydispersity have been investigated by means of Monte Carlo simulations. In the models, spherical particles of different additive diameters interact through Lennard-Jones and hard sphere Yukawa intermolecular potentials and the underlying distribution of particle sizes is a Gaussian. The Gibbs ensemble Monte Carlo technique has been applied to determine the phase coexistence far below the critical temperature. Critical points have been estimated by finite-size scaling analysis using histogram reweighting for NpT simulation data. In order to achieve efficient sampling in the vicinity of the critical points, the hyper-parallel tempering scheme has been utilized.

6.
A practical Monte Carlo sampling library (MCSL) is described. It provides high-quality algorithms and routines for pseudo-random number generation; optimal random-sampling methods and routines for important distributions; and sampling routines for the distributions of position, energy, and direction commonly used in particle transport problems. A dedicated test system for the sampling library is also included. The library runs on microcomputers, is easy to install and use, and offers good practicality and portability.
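As a hedged illustration of the kind of routines such a library provides, the sketch below samples two distributions commonly needed in particle transport: an exponential free-path length by inverse transform, and an isotropic direction. It is not taken from MCSL itself.

```python
import math
import random

rng = random.Random(12345)

def sample_free_path(sigma_t):
    """Inverse-transform sampling of the exponential free-path length
    used in particle transport: p(s) = sigma_t * exp(-sigma_t * s)."""
    return -math.log(1.0 - rng.random()) / sigma_t

def sample_isotropic_direction():
    """Uniform sampling of a direction on the unit sphere."""
    mu = 2.0 * rng.random() - 1.0            # cosine of the polar angle
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(1.0 - mu * mu)
    return s * math.cos(phi), s * math.sin(phi), mu
```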

7.
This paper develops a sequential trans-dimensional Monte Carlo algorithm for geoacoustic inversion in a strongly range-dependent environment. The algorithm applies advanced Markov chain Monte Carlo methods in combination with sequential techniques (particle filters) to carry out geoacoustic inversions for consecutive data sets acquired along a track. Changes in model parametrization along the track (e.g., number of sediment layers) are accounted for with trans-dimensional partition modeling, which intrinsically determines the amount of structure supported by the data information content. Challenging issues of rapid environmental change between consecutive data sets and high information content (peaked likelihood) are addressed by bridging distributions implemented using annealed importance sampling. This provides an efficient method to locate high-likelihood regions for new data which are distant and/or disjoint from previous high-likelihood regions. The algorithm is applied to simulated reflection-coefficient data along a track, such as can be collected using a towed array close to the seabed. The simulated environment varies rapidly along the track, with changes in the number of layers, layer thicknesses, and geoacoustic parameters within layers. In addition, the seabed contains a geologic fault, where all layers are offset abruptly, and an erosional channel. Changes in noise level are also considered.
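The bridging idea rests on annealed importance sampling. A minimal one-sample sketch, assuming a simple random-walk Metropolis move within each tempered target and user-supplied log_prior and log_like functions (not the paper's geoacoustic model), is:

```python
import numpy as np

def annealed_importance_weight(log_prior, log_like, x0, betas, step=0.2,
                               rng=np.random.default_rng(2)):
    """Annealed importance sampling for one sample: bridge from the prior
    (beta = 0) to the peaked posterior (beta = 1) through tempered targets
    pi_beta(x) proportional to prior(x) * likelihood(x)**beta."""
    x = np.array(x0, dtype=float)
    log_w = 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * log_like(x)          # incremental importance weight
        # One Metropolis move targeting the tempered distribution pi_b
        prop = x + step * rng.standard_normal(x.shape)
        log_alpha = (log_prior(prop) + b * log_like(prop)
                     - log_prior(x) - b * log_like(x))
        if np.log(rng.random()) < log_alpha:
            x = prop
    return x, log_w
```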

8.
Good performance with small ensemble filters applied to models with many state variables may require ‘localizing’ the impact of an observation to state variables that are ‘close’ to the observation. As a step in developing nearly generic ensemble filter assimilation systems, a method to estimate ‘localization’ functions is presented. Localization is viewed as a means to ameliorate sampling error when small ensembles are used to sample the statistical relation between an observation and a state variable. The impact of spurious sample correlations between an observation and model state variables is estimated using a ‘hierarchical ensemble filter’, where an ensemble of ensemble filters is used to detect sampling error. Hierarchical filters can adapt to a wide array of ensemble sizes and observational error characteristics with only limited heuristic tuning. Hierarchical filters can allow observations to efficiently impact state variables, even when the notion of ‘distance’ between the observation and the state variables cannot be easily defined. For instance, defining the distance between an observation of radar reflectivity from a particular radar and beam angle taken at 1133 GMT and a model temperature variable at 700 hPa 60 km north of the radar beam at 1200 GMT is challenging. The hierarchical filter estimates sampling error from a ‘group’ of ensembles and computes a factor between 0 and 1 to minimize sampling error; an a priori notion of distance is not required. Results are shown in both a low-order model and a simple atmospheric GCM. For low-order models, the hierarchical filter produces ‘localization’ functions that are very similar to those already described in the literature. When observations are more complex or taken at different times from the state specification (in ensemble smoothers, for instance), the localization functions become increasingly distinct from those used previously. In the GCM, this complexity reaches a level that suggests it would be difficult to define efficient localization functions a priori. There is a cost trade-off between running hierarchical filters and running a traditional filter with a larger ensemble size. Hierarchical filters can be run for short training periods to develop localization statistics that can then be used in a traditional ensemble filter to produce high quality assimilations at reasonable cost, even when the relation between observations and state variables is not well known a priori. Additional research is needed to determine whether it is ever cost-efficient to run hierarchical filters for large data assimilation problems instead of traditional filters with the corresponding total number of ensemble members.
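One plausible, purely illustrative form of the group-derived factor is sketched below: given regression coefficients between an observation increment and one state variable from a group of independent ensembles, it returns the scalar in [0, 1] that minimizes the expected squared difference between one group's scaled coefficient and the others'. The exact statistic used in the hierarchical filter may differ.

```python
import numpy as np

def sampling_error_factor(regression_coeffs):
    """Scalar factor in [0, 1] damping a regression coefficient, chosen to
    minimize sum_{i != j} (alpha * beta_i - beta_j)^2 over the group of
    coefficients beta_i (illustrative; not the published statistic)."""
    beta = np.asarray(regression_coeffs, dtype=float)
    n = beta.size
    denom = (n - 1) * np.sum(beta**2)
    if denom == 0.0:
        return 0.0
    alpha = (np.sum(beta)**2 - np.sum(beta**2)) / denom
    return float(np.clip(alpha, 0.0, 1.0))
```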

9.
We present an application of kinetic Monte Carlo (kMC) in the canonical ensemble to a calculation of vapour–liquid equilibrium and to describe the adsorption of argon on a flat graphite surface and in a slit-like graphitic pore. Simulations at 77 and 87.3 K accurately describe the experimental data. The kMC method is simple to implement and, unlike conventional Monte Carlo, no rejection trials are necessary. The only move is a uniform sampling of the volume space, which makes the determination of the chemical potential straightforward using real particles in the simulation, in the same spirit as the Widom inverse potential distribution. This avoids the need to freeze the real particles before the trial insertion of test particles as is necessary in other methods, such as the Widom method and its variants.

10.
Understanding the origin of electronic noise is important in semiconductor devices, and detecting time characteristics via statistical approaches is known to be useful in complex systems. In this study, the ensemble Monte Carlo particle method is used to simulate electron transport in a layered III-V semiconductor at room temperature. Nonlinear, erratic spiking fluctuations are predominant at the onset of current instabilities. To explore the time characteristics, detrended fluctuation analysis is used to analyze interspike intervals on different scales. Interestingly, multifractal behaviour is responsible for this kind of electronic noise. This indicates that many additional time characteristics emerge in semiconductor devices, strongly related to polar optical phonon scattering and intervalley transfer.
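A minimal sketch of detrended fluctuation analysis applied to an interspike-interval series (window sizes, detrending order, and names are illustrative, not the study's settings):

```python
import numpy as np

def dfa(intervals, window_sizes, order=1):
    """Detrended fluctuation analysis of an interspike-interval series.
    Returns F(n) for each window size n; a power law F(n) ~ n**alpha
    indicates fractal scaling of the fluctuations."""
    x = np.asarray(intervals, dtype=float)
    profile = np.cumsum(x - x.mean())                 # integrated (profile) series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        mse = []
        for k in range(n_windows):
            seg = profile[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local detrending
            mse.append(np.mean((seg - trend)**2))
        fluctuations.append(np.sqrt(np.mean(mse)))
    return np.array(fluctuations)
```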

11.
Applying three-dimensional filtering to the noisy Monte Carlo dose distributions obtained with few simulated particle histories and short simulation times can accelerate their convergence. Based on the characteristics of Monte Carlo dose distributions, three-dimensional Gaussian and Savitzky-Golay filters are improved and combined into a three-dimensional hybrid filtering method, and two basic combination schemes, parallel and cascaded, are compared. Using the properties of convolution, an equivalent convolution kernel is proposed to simplify the structure of the hybrid filter. The results show that the improved Gaussian and Savitzky-Golay filters achieve stronger overall denoising, that the hybrid filters further reduce the local error of the filtered results, that both hybrid filters substantially suppress the noise in rough MC dose distributions, and that the cascaded hybrid filter denoises slightly better than the parallel one.
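A hedged sketch of the two combination schemes, using off-the-shelf SciPy filters rather than the improved filters developed in the paper; the window, polynomial order, sigma, and parallel weight w are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import savgol_filter

def savgol_3d(dose, window=5, order=2):
    """Apply a 1-D Savitzky-Golay filter successively along the three axes."""
    out = dose
    for axis in range(3):
        out = savgol_filter(out, window_length=window, polyorder=order, axis=axis)
    return out

def hybrid_cascade(dose, sigma=1.0):
    """Cascaded combination: Gaussian smoothing followed by Savitzky-Golay."""
    return savgol_3d(gaussian_filter(dose, sigma=sigma))

def hybrid_parallel(dose, sigma=1.0, w=0.5):
    """Parallel combination: weighted average of the two filter outputs."""
    return w * gaussian_filter(dose, sigma=sigma) + (1.0 - w) * savgol_3d(dose)
```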

12.
This paper introduces a recursive particle filtering algorithm designed to filter high-dimensional systems with complicated non-linear and non-Gaussian effects. The method incorporates a parallel marginalization (PMMC) step in conjunction with the hybrid Monte Carlo (HMC) scheme to improve samples generated by standard particle filters. Parallel marginalization is an efficient Markov chain Monte Carlo (MCMC) strategy that uses lower-dimensional approximate marginal distributions of the target distribution to accelerate equilibration. As a validation the algorithm is tested on a 2516-dimensional, bimodal, stochastic model motivated by the Kuroshio current that runs along the Japanese coast. The results of this test indicate that the method is an attractive alternative for problems that require the generality of a particle filter but have been inaccessible due to the limitations of standard particle filtering strategies.
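A generic leapfrog hybrid Monte Carlo update, of the kind appended to particle filter samples, might look like the sketch below; the step size, trajectory length, and function names are assumptions, and the PMMC step itself is not shown.

```python
import numpy as np

def hmc_step(x, log_target, grad_log_target, eps=0.05, n_leapfrog=20,
             rng=np.random.default_rng(3)):
    """One hybrid Monte Carlo (HMC) update of a sample toward log_target."""
    p = rng.standard_normal(x.shape)                      # momentum refresh
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_target(x_new)           # initial half kick
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                              # drift
        p_new += eps * grad_log_target(x_new)             # full kick
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_target(x_new)           # final half kick
    # Metropolis correction on the Hamiltonian (negative log density + kinetic energy)
    h_old = -log_target(x) + 0.5 * np.dot(p, p)
    h_new = -log_target(x_new) + 0.5 * np.dot(p_new, p_new)
    return x_new if np.log(rng.random()) < h_old - h_new else x
```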

13.
In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used to examine explicit examples. To illustrate the general strategy, the method is applied to an analytically solvable, non-trivial model: the one-dimensional Ising model in a transverse field. Next it is shown how the generalized Trotter formula most naturally leads to different path-integral representations of the partition function by considering one-dimensional fermion lattice models. We show how to analyze the different representations and discuss Monte Carlo simulation results for one-dimensional fermions. Then Monte Carlo work on one- and two-dimensional spin-1/2 models based upon the Trotter formula approach is reviewed and the more dedicated Handscomb Monte Carlo method is discussed. We consider electron-phonon models and discuss Monte Carlo simulation data on the Molecular Crystal Model in one, two and three dimensions and related one-dimensional polaron models. Exact numerical results are presented for free fermions and free bosons in the canonical ensemble. We address the main problem of Monte Carlo simulations of fermions in more than one dimension: the cancellation of large contributions. Free bosons on a lattice are compared with bosons in a box and the effects of finite size on Bose-Einstein condensation are discussed.
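For reference, a standard statement of the generalized Trotter (Suzuki–Trotter) formula for a Hamiltonian split as H = H_1 + H_2, which underlies the path-integral representations discussed above:

```latex
% Generalized Trotter (Suzuki--Trotter) formula for Z = Tr exp[-\beta(H_1 + H_2)]:
\[
  e^{-\beta (H_1 + H_2)}
    = \lim_{m \to \infty}
      \left( e^{-\beta H_1 / m}\, e^{-\beta H_2 / m} \right)^{m},
  \qquad
  Z = \lim_{m \to \infty}
      \operatorname{Tr}\left( e^{-\beta H_1 / m}\, e^{-\beta H_2 / m} \right)^{m}.
\]
% Inserting complete sets of states between the m factors maps the d-dimensional
% quantum problem onto a (d+1)-dimensional classical path-integral representation.
```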

14.
By casting stochastic optimal estimation of time series in path integral form, one can apply analytical and computational techniques of equilibrium statistical mechanics. In particular, one can use standard or accelerated Monte Carlo methods for smoothing, filtering and/or prediction. Here we demonstrate the applicability and efficiency of generalized (nonlocal) hybrid Monte Carlo and multigrid methods applied to optimal estimation, specifically smoothing. We test these methods on stochastic diffusion dynamics in a bistable potential. This particular problem has been chosen to illustrate the speedup due to the nonlocal sampling technique, and because there is an available optimal solution which can be used to validate the solution obtained via the hybrid Monte Carlo strategy. In addition to showing that the nonlocal hybrid Monte Carlo is statistically accurate, we demonstrate a significant speedup compared with other strategies, thus making it a practical alternative for smoothing/filtering and data assimilation on problems with state vectors of fairly large dimensions as well as a large total number of time steps.

15.
Based on the statistical dynamic mean-field theory, we investigate, in a generic model for a strongly coupled disordered electron–phonon system, the competition between polaron formation and Anderson localization. The statistical dynamic mean-field approximation maps the lattice problem to an ensemble of self-consistently embedded impurity problems. It is a probabilistic approach, focusing on the distribution instead of the average values for observables of interest. We solve the self-consistent equations of the theory with a Monte Carlo sampling technique, representing distributions for random variables by random samples, and discuss various ways to determine mobility edges from the random sample for the local Green function. Specifically, we give, as a function of the ‘polaron parameters’, such as adiabaticity and electron–phonon coupling constants, a detailed discussion of the localization properties of a single polaron, using a bare electron as a reference system.

16.
Described here is a path-integral, sampling-based approach for data assimilation of sequential data and evolutionary models. Since it makes no assumptions of linearity in the dynamics or of Gaussianity in the statistics, it permits consideration of very general estimation problems. The method can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization. Speedup of the Monte Carlo sampling process is essential if the path integral method is to be a viable estimator on moderately large problems. Here a variety of strategies are proposed and compared for their relative ability to improve the sampling efficiency of the resulting estimator. Details useful for implementation and testing are provided as well. The method is applied to a problem in which standard methods are known to fail, an idealized flow/drifter problem, which has been used as a testbed for assimilation strategies involving Lagrangian data. It is in this kind of context that the method may prove to be a useful assimilation tool in oceanic studies.

17.
The particle filter is a nonlinear, non-Gaussian filter based on Monte Carlo ideas, and it generally draws particles by importance sampling. Importance sampling, however, is prone to particle degeneracy. Degeneracy is usually addressed by resampling, but while resampling solves the degeneracy problem it introduces sample impoverishment. Based on the statistical properties of ocean reverberation and the principle of constant false alarm rate (CFAR) detection of targets in reverberation, this paper proposes a CFAR-sampling particle filter. CFAR sampling concentrates the particles as closely as possible around the target, describes the target posterior probability effectively, and reduces both the number of particles and the computational cost. The technique is applied to sonar target tracking in ocean reverberation, overcoming both the nonlinear, non-Gaussian limitations of the traditional Kalman filter in sonar target tracking and the degeneracy and impoverishment problems of the particle filter. Filtering and tracking of high-resolution sonar target data verify the effectiveness of the proposed method.
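For context, the standard resampling step referred to above can be sketched as below; this is systematic resampling, one common choice, and not the CFAR sampling scheme of the paper.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng(4)):
    """Systematic resampling: returns particle indices drawn with probability
    proportional to the normalized weights, using a single random offset."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n   # stratified positions in [0, 1)
    return np.searchsorted(np.cumsum(w), positions)
```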

18.
This paper presents a new approach to the analysis of data on powder flow from electrical capacitance tomography (ECT) using probability modelling and Bayesian statistics. The methodology is illustrated for powder flow in a hopper. The purpose, and special feature, of this approach is that ‘high-level’ statistical Bayesian modelling combined with a Markov chain Monte Carlo (MCMC) sampling algorithm allows direct estimation of the control parameters of industrial processes, in contrast to the usually applied ‘low-level’, pixel-based methods of data analysis. This enables reliable recognition of key process features in a quantitative manner. The main difficulty when investigating hopper flow with ECT is the need to measure small differences in particle packing density. The MCMC protocol enables more robust identification of the responses of such complex systems. This paper demonstrates the feasibility of the approach for a simple case of particulate material flow during discharging of a hopper. It is concluded that these approaches can offer significant advantages for the analysis and control of some industrial powder and other multi-phase flow processes.
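A minimal random-walk Metropolis–Hastings sketch of the kind of MCMC sampling used for direct parameter estimation; the log-posterior, step size, and names are illustrative assumptions, not the paper's flow model.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_samples=5000, step=0.1,
                        rng=np.random.default_rng(5)):
    """Random-walk Metropolis sampler for the posterior of process parameters."""
    theta = np.array(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)   # symmetric proposal
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:                  # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```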

19.
We present a novel approach for improving particle filters for multi-target tracking. The suggested approach is based on drift homotopy for stochastic differential equations. Drift homotopy is used to design a Markov Chain Monte Carlo step which is appended to the particle filter and aims to bring the particle filter samples closer to the observations while at the same time respecting the target dynamics. We have used the proposed approach on the problem of multi-target tracking with a nonlinear observation model. The numerical results show that the suggested approach can significantly improve the performance of a particle filter.

20.
DAVID S. CORTI 《Molecular physics》2013,111(12):1887-1904
A modification of the widely used Monte Carlo method for determining thermophysical properties in the isothermal-isobaric ensemble is presented. The new Monte Carlo method, now consistent with recent derivations describing the proper statistical mechanical formulation of the constant pressure ensemble for small systems, requires a ‘shell’ molecule to uniquely identify the volume of the system, thereby avoiding the redundant counting of configurations. Ensemble averages obtained with the new algorithm differ from averages calculated with the old Monte Carlo method, particularly for small system sizes, although both sets of averages become equal in the thermodynamic limit. Monte Carlo simulations in the constant pressure ensemble applied to the Lennard-Jones fluid demonstrate these differences for small systems. Peculiarities of small systems are also discussed, revealing that ‘shape’ is an important thermodynamic variable. Finally, an extension of the Monte Carlo method to mixtures is presented.
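For contrast, the conventional isothermal-isobaric volume-move acceptance test that the shell-molecule formulation modifies can be sketched as follows, assuming the trial volume is sampled uniformly in V; the shell-molecule correction itself is not reproduced here.

```python
import numpy as np

def accept_volume_move(U_old, U_new, V_old, V_new, N, beta, pressure,
                       rng=np.random.default_rng(6)):
    """Acceptance test for a volume change in conventional NpT Monte Carlo
    (log of the standard acceptance ratio, including the N*ln(V'/V) term)."""
    log_acc = (-beta * (U_new - U_old)
               - beta * pressure * (V_new - V_old)
               + N * np.log(V_new / V_old))
    return np.log(rng.random()) < log_acc
```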
