AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS |
| |
Authors: | Liang Yan, Tao Zhou |
| |
Affiliation: | School of Mathematics, Southeast University, Nanjing 210096, China; Nanjing Center for Applied Mathematics, Nanjing 211135, China; LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China |
| |
Abstract: | Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximate posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the computational accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform the traditional RTO. |
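To illustrate the RTO proposal mechanism underlying the abstract, the following is a minimal sketch for a toy linear-Gaussian inverse problem, where the randomly perturbed optimization step has a closed-form least-squares solution and RTO samples the posterior exactly. It does not include the paper's DNN surrogate or the Metropolis correction needed for nonlinear forward models; all variable names and the toy problem itself are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = G @ theta + noise, with a standard Gaussian prior.
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
sigma = 0.1                                   # observation noise std
theta_true = np.array([1.0, -0.5])
y = G @ theta_true + sigma * rng.normal(size=3)

# Whitened stacked residual F(theta) = A @ theta - b combines likelihood and prior.
A = np.vstack([G / sigma, np.eye(2)])         # Jacobian of F (constant: linear model)
b = np.concatenate([y / sigma, np.zeros(2)])

# RTO uses Q from a thin QR factorization of the Jacobian (here, at any point).
Q, _ = np.linalg.qr(A)

def rto_sample():
    # Randomize-then-optimize: perturb with eps ~ N(0, I), then solve
    # argmin_theta || Q.T @ (A @ theta - b) - eps ||^2.
    eps = rng.normal(size=2)
    theta, *_ = np.linalg.lstsq(Q.T @ A, Q.T @ b + eps, rcond=None)
    return theta

samples = np.array([rto_sample() for _ in range(5000)])

# For a linear forward model RTO is exact: compare with the analytic posterior.
post_cov = np.linalg.inv(A.T @ A)
post_mean = post_cov @ A.T @ b
print(np.abs(samples.mean(axis=0) - post_mean).max())  # near zero, up to Monte Carlo error
```

In the paper's setting the forward model inside the optimization step is an expensive PDE solve; the DNN-RTO idea is to replace it with a cheap surrogate trained on points drawn from a local approximation of the posterior, so that each perturbed optimization no longer requires full forward-model and gradient evaluations.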
| |
Keywords: | Bayesian inverse problems; Deep neural network; Markov chain Monte Carlo |
Journal: | Journal of Computational Mathematics (《计算数学(英文版)》) |
|