Similar Documents (20 results)
1.
Abstract By casting stochastic optimal estimation of time series in path integral form, one can apply analytical and computational techniques of equilibrium statistical mechanics. In particular, one can use standard or accelerated Monte Carlo methods for smoothing, filtering and/or prediction. Here we demonstrate the applicability and efficiency of generalized (nonlocal) hybrid Monte Carlo and multigrid methods applied to optimal estimation, specifically smoothing. We test these methods on stochastic diffusion dynamics in a bistable potential. This particular problem has been chosen to illustrate the speedup due to the nonlocal sampling technique, and because an optimal solution is available that can be used to validate the solution obtained via the hybrid Monte Carlo strategy. In addition to showing that the nonlocal hybrid Monte Carlo is statistically accurate, we demonstrate a significant speedup compared with other strategies, making it a practical alternative for smoothing/filtering and data assimilation on problems with state vectors of fairly large dimension, as well as a large total number of time steps.

2.
The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the method on an example that takes the transmembrane voltage time series of a simulated neuron as input and uses a Hodgkin–Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

3.
The viewpoint taken in this paper is that data assimilation is fundamentally a statistical problem and that this problem should be cast in a Bayesian framework. In the absence of model error, the correct solution to the data assimilation problem is to find the posterior distribution implied by this Bayesian setting. Methods for dealing with data assimilation should then be judged by their ability to probe this distribution. In this paper we propose a range of techniques for probing the posterior distribution, based around the Langevin equation; and we compare these new techniques with existing methods.

When the underlying dynamics is deterministic, the posterior distribution is on the space of initial conditions leading to a sampling problem over this space. When the underlying dynamics is stochastic the posterior distribution is on the space of continuous time paths. By writing down a density, and conditioning on observations, it is possible to define a range of Markov Chain Monte Carlo (MCMC) methods which sample from the desired posterior distribution, and thereby solve the data assimilation problem. The basic building-blocks for the MCMC methods that we concentrate on in this paper are Langevin equations which are ergodic and whose invariant measures give the desired distribution; in the case of path space sampling these are stochastic partial differential equations (SPDEs).

Two examples are given to show how data assimilation can be formulated in a Bayesian fashion. The first is weather prediction, and the second is Lagrangian data assimilation for oceanic velocity fields. Furthermore, the relationship between the Bayesian approach outlined here and the commonly used Kalman filter based techniques, prevalent in practice, is discussed. Two simple pedagogical examples are studied to illustrate the application of Bayesian sampling to data assimilation concretely. Finally, a range of open mathematical and computational issues arising from the Bayesian approach are outlined.
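The Langevin building block described above can be sketched in its simplest discretized form: an unadjusted Langevin algorithm whose invariant measure approximates a target posterior. A minimal illustration on a standard-normal target (an illustrative toy, not the paper's SPDE path-space setting):

```python
import numpy as np

def langevin_sample(grad_log_p, x0, n_steps, dt, rng):
    """Unadjusted Langevin algorithm: dX = grad log p(X) dt + sqrt(2 dt) dW."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x = x + dt * grad_log_p(x) + np.sqrt(2.0 * dt) * rng.standard_normal()
        samples[i] = x
    return samples

rng = np.random.default_rng(0)
# Target: standard normal, so grad log p(x) = -x
samples = langevin_sample(lambda x: -x, 0.0, 100_000, 0.05, rng)
burned = samples[10_000:]          # discard burn-in
print(burned.mean(), burned.var())  # both close to the target's 0 and 1
```

Correcting the discretization bias with a Metropolis accept/reject step (MALA) recovers the exact invariant measure, which is the usual refinement of this scheme.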


4.
Path integral simulations are now recognized as a useful tool to determine theoretically the structure of complex molecules at finite temperatures, including quantum effects. In addition to statistical errors due to incomplete sampling, systematic errors are also inherent in this procedure because of the finite discretization of the path integral. Here, useful "back of the envelope" estimates to assess the systematic errors of bond-length distribution functions are introduced. These analytical estimates are tested for two small molecules, HD+ and H3+, for which quasi-exact benchmark data are available. The accuracy of the formulae is shown to be sufficient to allow a reliable assessment of the quality of the discretization in a given simulation. The estimates will also be applicable in condensed-phase path integral simulations, and the basic idea can be generalized to observables other than those presented. Received 13 September 1999 and Received in final form 18 November 1999
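The systematic error from a finite number of path-integral time slices comes from the Trotter factorization of the density matrix. A generic numerical illustration of that error (small symmetric matrices standing in for kinetic and potential operators; not the paper's molecular estimates) shows the primitive splitting error shrinking as the number of factors P grows:

```python
import numpy as np

def sym_expm(M):
    """exp(M) for a symmetric matrix M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = 0.5 * (A + A.T)   # stand-in "kinetic" part
B = rng.standard_normal((4, 4)); B = 0.5 * (B + B.T)   # stand-in "potential" part
tau = 1.0
exact = sym_expm(tau * (A + B))

def trotter_error(P):
    # primitive factorization: (e^{tau A/P} e^{tau B/P})^P vs e^{tau(A+B)}
    step = sym_expm(tau * A / P) @ sym_expm(tau * B / P)
    return np.linalg.norm(np.linalg.matrix_power(step, P) - exact)

errs = [trotter_error(P) for P in (8, 16, 32, 64)]
print(errs)   # shrinks roughly in proportion to 1/P
```

Higher-order (symmetric) splittings improve the decay to 1/P², which is why convergence checks in P are standard practice in PIMC work.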

5.
The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition function. The data contain errors (observation and background errors), hence there will be errors in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can often be approximated by the inverse Hessian of the cost functional. Here we focus on highly nonlinear dynamics, in which case this approximation may not be valid. The equation relating the optimal solution error and the errors of the input data is used to construct an approximation of the optimal solution error covariance. Two new methods for computing this covariance are presented: the fully nonlinear ensemble method with sampling error compensation and the 'effective inverse Hessian' method. The second method relies on the efficient computation of the inverse Hessian by the quasi-Newton BFGS method with preconditioning. Numerical examples are presented for a model governed by the Burgers equation with a nonlinear viscous term.
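The inverse-Hessian approximation mentioned above is exact in the linear-Gaussian case: the Hessian of the variational cost function is B⁻¹ + HᵀR⁻¹H, and its inverse equals the standard Kalman analysis covariance. A small numerical check (toy matrices, not the paper's Burgers model):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 6
B = 0.5 * np.eye(n)                      # background-error covariance
R = 0.1 * np.eye(m)                      # observation-error covariance
H = rng.standard_normal((m, n))          # (linearized) observation operator

# Hessian of J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (Hx-y)' R^-1 (Hx-y)
hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
cov_from_hessian = np.linalg.inv(hessian)

# Same covariance from the Kalman/BLUE analysis update
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
cov_kalman = (np.eye(n) - K @ H) @ B

print(np.max(np.abs(cov_from_hessian - cov_kalman)))  # agree to round-off
```

For highly nonlinear dynamics the Hessian varies over the posterior, which is exactly why the abstract's 'effective inverse Hessian' correction is needed.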

6.
A Bayesian tutorial for data assimilation
Data assimilation is the process by which observational data are fused with scientific information. The Bayesian paradigm provides a coherent probabilistic approach for combining information, and thus is an appropriate framework for data assimilation. Viewing data assimilation as a problem in Bayesian statistics is not new. However, the field of Bayesian statistics is rapidly evolving and new approaches for model construction and sampling have been utilized recently in a wide variety of disciplines to combine information. This article includes a brief introduction to Bayesian methods. Paying particular attention to data assimilation, we review linkages to optimal interpolation, kriging, Kalman filtering, smoothing, and variational analysis. Discussion is provided concerning Monte Carlo methods for implementing Bayesian analysis, including importance sampling, particle filtering, ensemble Kalman filtering, and Markov chain Monte Carlo sampling. Finally, hierarchical Bayesian modeling is reviewed. We indicate how this approach can be used to incorporate significant physically based prior information into statistical models, thereby accounting for uncertainty. The approach is illustrated in a simplified advection–diffusion model.
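One of the Monte Carlo tools reviewed above, importance sampling, reduces in its simplest form to weighting prior draws by the likelihood to approximate posterior moments. A minimal sketch on a conjugate Gaussian problem, where the exact posterior mean is available for comparison (the numbers below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
prior_mean, prior_var = 0.0, 1.0
obs, obs_var = 1.2, 0.25

x = rng.normal(prior_mean, np.sqrt(prior_var), size=200_000)  # draws from the prior
w = np.exp(-0.5 * (obs - x) ** 2 / obs_var)                   # Gaussian likelihood weights
w /= w.sum()                                                  # self-normalize
post_mean_is = np.sum(w * x)

# Conjugate Gaussian posterior mean for comparison
post_mean_exact = (obs / obs_var + prior_mean / prior_var) / (1 / obs_var + 1 / prior_var)
print(post_mean_is, post_mean_exact)  # both close to 0.96
```

Particle filtering applies this same weighting sequentially, with a resampling step to combat weight degeneracy.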

7.
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

8.
It is shown that, starting from any existing Monte Carlo algorithm for estimation of a physical quantity A, it is possible to implement a simple additional procedure that simultaneously estimates the sensitivity of A to any problem parameter. The corresponding supplementary cost is very low as no additional random sampling is required. The principle is presented on a formal basis and simple radiative transfer examples are used for illustration.
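The "same samples, extra tally" idea can be illustrated with the generic score-function (likelihood-ratio) identity dE_θ[f(X)]/dθ = E_θ[f(X) ∂_θ log p_θ(X)]: the sensitivity is a second tally over the samples already drawn. A toy sketch (exponential sampling density, not the paper's radiative transfer setting), where A = E[X] = 1/λ and dA/dλ = −1/λ²:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0
x = rng.exponential(1.0 / lam, size=500_000)   # the "existing" Monte Carlo samples

A_hat = x.mean()                               # estimate of A = E[X] = 1/lam = 0.5
score = 1.0 / lam - x                          # d log p(x)/d lam for p = lam*exp(-lam*x)
dA_hat = np.mean(x * score)                    # sensitivity tally over the SAME samples

print(A_hat, dA_hat)   # close to 0.5 and -0.25
```

Both tallies share one sampling pass, which is the source of the "very low supplementary cost" claimed in the abstract.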

9.
The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by up to two orders of magnitude.
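The variance contrast between analog and track-length tallies can be seen in a toy problem: absorption in an optically thin, purely absorbing slab, where both estimators are unbiased but the analog (binary) tally is much noisier. This is an illustrative sketch with assumed parameters, not the paper's radiation-momentum test problems:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_a, L, n = 1.0, 0.1, 200_000        # absorption cross-section, slab width, histories
s = rng.exponential(1.0 / sigma_a, n)    # sampled free-flight distances

analog = (s < L).astype(float)           # analog tally: 1 if absorbed inside the slab
track = sigma_a * np.minimum(s, L)       # track-length tally: Sigma_a * path in slab

exact = 1.0 - np.exp(-sigma_a * L)       # absorption probability, about 0.0952
print(analog.mean(), track.mean(), exact)
print(analog.var(), track.var())         # track-length variance is far smaller here
```

In optically thick regions the ranking can reverse, which is why production codes offer several estimator choices.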

10.
Cao Xiaoqun, Huang Qunbo, Liu Bainian, Zhu Mengbin, Yu Yi. Acta Physica Sinica, 2015, 64(13): 130502
To address the low accuracy and complexity of computing the gradient of the objective functional in variational data assimilation, a new data assimilation method based on dual-number theory is proposed. Its main advantage is that it avoids developing a complex adjoint model and integrating it backward in time: a single forward integration in dual-number space yields the values of the objective functional and the gradient vector simultaneously. First, dual-number theory is used to recast the gradient analysis as an evaluation of the objective functional in dual-number space, giving the gradient vector simply, efficiently, and with high accuracy. Second, combined with standard optimization methods, a new solution algorithm for data assimilation in nonlinear physical systems is presented. Finally, numerical data assimilation experiments are carried out on the Lorenz 63 chaotic system, a non-differentiable physical model containing on-off switches, and a parabolic partial differential equation. The results show that the new method can effectively and accurately estimate the initial conditions or physical parameter values of the forecast model.
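The dual-number idea above is forward-mode automatic differentiation: each value is augmented with an infinitesimal part (a + b·ε with ε² = 0), so one forward evaluation produces both the functional and its derivative, with no adjoint integration. A minimal scalar sketch (an illustration of the principle, not the authors' assimilation code):

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; the eps part carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule happens automatically in the eps part
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

def f(x):                  # a stand-in "objective functional": f(x) = x**2 + sin(x)
    return x * x + x.sin()

y = f(Dual(1.0, 1.0))      # seed derivative 1.0 to get df/dx at x = 1
print(y.val, y.der)        # f(1) = 1 + sin(1), f'(1) = 2 + cos(1)
```

Propagating dual numbers through a forward model integration gives the gradient of the cost with respect to a parameter in the same single sweep.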

11.
Li Shu, Deng Li, Tian Dongfeng, Li Gang. Acta Physica Sinica, 2014, 63(23): 239501
When the implicit Monte Carlo method is used to simulate the transport of thermal radiation photons through matter, the thermal emission source particles must be treated carefully. The traditional approach samples source-particle positions uniformly over the cell volume, which introduces little bias for most problems. However, for problems with large radiation absorption cross-sections and significant temperature differences within a single cell, the volume-averaged sampling method produces large errors. This paper analyzes the cause of the bias, proposes a source-particle position sampling method based on the spatial distribution of radiation energy density, and derives the corresponding sampling formulas. Numerical experiments show that the new method clearly outperforms the original one, with results essentially consistent with analytic solutions.

12.
Compression behavior of solid hydrogen
The equation of state of solid hydrogen is studied with a path integral Monte Carlo method in the isothermal-isobaric ensemble, taking both the translational and rotational degrees of freedom of the molecules into account. In the region where experimental data exist, the calculated results agree well with experiment; in the ultra-high-pressure region where no experimental data are available, they agree with the extrapolation of the experimental results. To study the zero-point motion quantitatively, the energy of the system is also calculated.

13.
Deep-penetration problems in shielding calculations have long been difficult for Monte Carlo methods. This work studies a random-walk scheme that treats emission points as relay stations and derives a corresponding adaptive sampling method. Its main advantage is that, while solving the particle transport problem by Monte Carlo, the information already accumulated is used to adaptively control the number of samples at each stage, continually refining the calculation. By introducing an importance function at collision points, importance sampling with emission points as relay stations is realized and, combined with the adaptive control, an optimal sampling state is reached. Numerical results show that the adaptive sampling method based on emission points as relay stations overcomes, to a certain extent, the underestimation commonly seen in deep-penetration calculations, and the corresponding importance-function sampling method yields satisfactory results.

14.
The uniform electron gas (UEG) is one of the key models for the understanding of warm dense matter—an exotic, highly compressed state of matter between solid and plasma phases. The difficulty in modelling the UEG arises from the need to simultaneously account for Coulomb correlations, quantum effects, and exchange effects, as well as finite temperature. The most accurate results so far were obtained from quantum Monte Carlo (QMC) simulations with a variety of representations. However, QMC for electrons is hampered by the fermion sign problem. Here, we present results from a novel fermionic propagator path integral Monte Carlo in the restricted grand canonical ensemble. The ab initio simulation results for the spin-resolved pair distribution functions and static structure factor are reported for two isotherms (T in the units of the Fermi temperature). Furthermore, we combine the results from the linear response theory in the Singwi-Tosi-Land-Sjölander scheme with the QMC data to remove finite-size errors in the interaction energy. We present a new corrected parametrization for the interaction energy and the exchange–correlation free energy in the thermodynamic limit, and benchmark our results against the restricted path integral Monte Carlo by Brown et al. [Phys. Rev. Lett. 110, 146405 (2013)] and configuration path integral Monte Carlo/permutation-blocking path integral Monte Carlo by Dornheim et al. [Phys. Rev. Lett. 117, 115701 (2016)].

15.
We analyse the simulation of strongly degenerate electrons at finite temperature using the recently introduced permutation blocking path integral Monte Carlo (PB-PIMC) method [T. Dornheim et al., New J. Phys. 17, 073017 (2015)]. As a representative example, we consider electrons in a harmonic confinement and carry out simulations for up to P = 2000 so-called imaginary-time propagators, an important convergence parameter within the PIMC formalism. This allows us to study the P-dependence of different observables of the configuration space in the Monte Carlo simulations and of the fermion sign problem. We find a surprisingly persistent effect of the permutation blocking for large P, which is explained by comparing different length scales. Finally, we touch upon the uniform electron gas in the warm dense matter regime.

16.
A novel path integral Monte Carlo (PIMC) approach for correlated many-particle systems with arbitrary pair interaction in continuous space at low temperatures is presented. It is based on a representation of the N-particle density operator in a basis of (anti-)symmetrized N-particle states (configurations of occupation numbers). The path integral is transformed into a sum over trajectories with the same topology and, finally, the limit of M → ∞, where M is the number of high-temperature factors, is performed analytically. This yields exact expressions for the thermodynamic quantities and allows one to perform efficient simulations for fermions at low temperature and weak to moderate coupling. Our method is expected to be applicable to dense quantum plasmas in the regime of strong degeneracy where conventional PIMC fails due to the fermion sign problem.

17.
The problem of finding the shortest closed path connecting N randomly chosen points is one of the classic NP-complete problems. We show that the length of the tour depends logarithmically on the cooling rate Q in a simulated Monte Carlo anneal. We speculate that this is a general property of all NP-complete problems.
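A simulated Monte Carlo anneal of the kind described above can be sketched with a geometric cooling schedule and 2-opt moves on random cities (an illustrative sketch with assumed parameters, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.random((30, 2))                       # N = 30 random cities in the unit square

def tour_length(order):
    p = pts[order]
    return float(np.sum(np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1)))

order = np.arange(len(pts))
initial = cur = best = tour_length(order)
T = 1.0
while T > 1e-3:                                 # geometric cooling, rate Q = 0.999
    i, j = sorted(rng.integers(0, len(pts), size=2))
    cand = order.copy()
    cand[i:j + 1] = cand[i:j + 1][::-1]         # 2-opt move: reverse a segment
    new = tour_length(cand)
    # Metropolis acceptance: always take improvements, sometimes uphill moves
    if new < cur or rng.random() < np.exp(-(new - cur) / T):
        order, cur = cand, new
        best = min(best, cur)
    T *= 0.999

print(initial, best)   # the annealed tour is markedly shorter than the random start
```

Slower cooling (Q closer to 1) means more proposals per temperature, which is the knob whose logarithmic effect on tour length the paper measures.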

18.
Shooter localization and estimation of bullet trajectory, caliber and speed have become essential tasks in, for example, peacekeeping and police operations. A novel approach to such estimation and localization is presented in this paper, in which a numerical estimation method is applied to the problem. Both simulated and recorded gunshot data are considered: a known bullet shock wave model and detected firing sounds are used to construct a likelihood function over different bullet states. For this, a state-space model of the underlying dynamic system is developed, and a well-known optimization algorithm is used to find the global maximum of the evaluated function. Two different criteria are used to measure the likelihood values, namely the Generalized Cross Correlation (GCC) and the Mean-Squared Error (MSE). The localization and estimation results achieved are accurate and applicable against hostile snipers. The shooter position and bullet state estimation errors vary between 2% and 10%, depending on the parameter being estimated.
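At its core, the GCC criterion locates the peak of a (weighted) cross-correlation between two sensor signals to estimate a time delay. A minimal sketch with a synthetic, circularly shifted signal and the common PHAT weighting (an assumed setup, not the paper's gunshot data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2048
true_delay = 37                                  # delay in samples

sig = rng.standard_normal(n)
mic1 = sig + 0.05 * rng.standard_normal(n)
mic2 = np.roll(sig, true_delay) + 0.05 * rng.standard_normal(n)  # circular shift

# GCC-PHAT: whiten the cross-spectrum, inverse-transform, pick the peak lag
X1, X2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
cross = X2 * np.conj(X1)
cc = np.fft.irfft(cross / np.abs(cross), n=n)
est_delay = int(np.argmax(cc))                   # recovers the 37-sample delay
print(est_delay)
```

With several microphones, the pairwise delays constrain the source position, which is the geometric half of the localization problem.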

19.
A Monte Carlo approach to radiative transfer in participating media is described and tested. It solves to a large extent the well known problem of Monte Carlo simulation of optically thick absorption configurations. The approach, which is based on a net-exchange formulation and on adapted optical path sampling procedures, is carefully designed to ensure satisfactory convergence for all types of optical thicknesses. The need for such adapted algorithms is mainly related to the problem of gaseous line spectra representation, in which extremely large ranges of optical thicknesses may be simultaneously encountered. The algorithm is tested against various band average computations for simple geometries using the Malkmus statistical narrow band model.

20.
Liang Minghui, Zheng Feihu, An Zhenlian, Zhang Yewen. Acta Physica Sinica, 2016, 65(7): 077702
The thermal-pulse method is an effective technique for measuring the space-charge distribution in thin polymer dielectric films. Analyzing its data involves a Fredholm integral equation of the first kind, which can only be solved by suitable numerical methods; the Monte Carlo method is one such numerical approach proposed in recent years. This paper applies the Monte Carlo method to the analysis of thermal-pulse data in the frequency domain and evaluates its performance through a series of simulations. The results show that the Monte Carlo method can effectively analyze thermal-pulse experimental data and extract the electric-field distribution inside the measured film, and that the computed field agrees well with the true distribution over the entire sample thickness, compensating for the scale-transformation method's limitation of achieving high accuracy only near the sample surfaces. The method's limitations are that the results exhibit some oscillation and that, under the influence of noise and data errors, its accuracy depends strongly on the choice of tolerance in the singular value decomposition; its ease of use also leaves room for improvement.
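The sensitivity to the SVD tolerance noted above is characteristic of first-kind Fredholm equations: the discretized kernel is severely ill-conditioned, so small singular values amplify measurement noise unless they are truncated. A minimal sketch with a hypothetical Gaussian smoothing kernel (not the thermal-pulse kernel itself):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
x = np.linspace(0, 1, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))   # smoothing kernel
K /= K.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * x)                   # profile to recover
g = K @ f_true + 1e-4 * rng.standard_normal(n)   # measured, slightly noisy data

U, s, Vt = np.linalg.svd(K)
def solve(tol):
    keep = s > tol * s[0]                        # drop singular values below tolerance
    return Vt[keep].T @ ((U[:, keep].T @ g) / s[keep])

f_naive = solve(1e-14)                           # essentially no truncation
f_reg = solve(1e-3)                              # tolerance-regularized inversion
print(np.linalg.norm(f_naive - f_true),          # huge: noise blown up by tiny s
      np.linalg.norm(f_reg - f_true))            # small: stable reconstruction
```

Choosing the tolerance trades noise amplification against truncation bias, which is exactly the dependence the abstract reports.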
