Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel strategy to substantially reduce the computational burden of RTO by using a goal-oriented deep neural network (DNN) surrogate approach. In particular, the training points for the DNN surrogate are drawn from a locally approximated posterior distribution, and we show that the resulting algorithm provides a flexible and efficient sampling scheme that converges to the direct RTO approach. We present a Bayesian inverse problem governed by a benchmark elliptic PDE to demonstrate the accuracy and efficiency of the new algorithm (DNN-RTO), and show that it significantly outperforms the traditional RTO.
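To make the idea above concrete, the following is a minimal, illustrative Python sketch (not the paper's implementation) of a surrogate-accelerated randomize-then-optimize-style sampler on a toy problem. All names (toy_forward, G_hat, the training-set design, problem sizes) are hypothetical; the surrogate here is trained on points drawn from a simple Gaussian centered near the posterior, standing in for the goal-oriented training strategy described above, and the optimization step is the simpler randomized-maximum-likelihood variant, omitting the Jacobian-based reweighting used by the full RTO method.

```python
# Illustrative sketch only: RTO-style sampling with a neural-network surrogate.
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_forward(theta):
    """Stand-in for the expensive PDE forward model (hypothetical)."""
    return np.array([np.sin(theta[0]) + theta[1] ** 2,
                     theta[0] * theta[1]])

# Synthetic data; Gaussian prior N(mu_pr, sig_pr^2 I), noise N(0, sig_obs^2 I).
theta_true = np.array([0.8, -0.5])
sig_obs, sig_pr, mu_pr = 0.05, 1.0, np.zeros(2)
y = toy_forward(theta_true) + sig_obs * rng.standard_normal(2)

# 1) Train a DNN surrogate on points concentrated near the posterior
#    (here: a simple Gaussian around the prior mean, as a placeholder).
theta_train = mu_pr + 0.5 * rng.standard_normal((500, 2))
G_train = np.array([toy_forward(t) for t in theta_train])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(theta_train, G_train)

def G_hat(theta):
    """Cheap surrogate evaluation replacing the expensive forward model."""
    return surrogate.predict(theta.reshape(1, -1)).ravel()

# 2) RTO-style sampling: each sample solves a randomly perturbed
#    least-squares problem using the surrogate instead of toy_forward.
def perturbed_residual(theta, eps, xi):
    return np.concatenate([(G_hat(theta) - (y + eps)) / sig_obs,
                           (theta - (mu_pr + xi)) / sig_pr])

samples = []
for _ in range(200):
    eps = sig_obs * rng.standard_normal(2)   # perturb the data
    xi = sig_pr * rng.standard_normal(2)     # perturb the prior mean
    sol = least_squares(perturbed_residual, x0=mu_pr, args=(eps, xi))
    samples.append(sol.x)
samples = np.asarray(samples)
print("surrogate-accelerated posterior mean estimate:", samples.mean(axis=0))
```

The computational saving comes from the second stage: every perturbed optimization queries only the cheap surrogate G_hat, so the expensive forward model is evaluated just once per training point rather than many times per posterior sample.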