We introduce a framework for Bayesian experimental design (BED) with implicit models, where the data-generating distribution is intractable but sampling from it is still possible. In order to find optimal experimental designs for such models, our approach maximises mutual information lower bounds that are parametrised by neural networks. By training a neural network on sampled data, we simultaneously update network parameters and designs using stochastic gradient ascent. The framework enables experimental design with a variety of prominent lower bounds and can be applied to a wide range of scientific tasks, such as parameter estimation, model discrimination and improving future predictions. Using a set of intractable toy models, we provide a comprehensive empirical comparison of prominent lower bounds applied to the aforementioned tasks. We further validate our framework on a challenging system of stochastic differential equations from epidemiology.
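To illustrate the core idea of choosing designs by maximising a mutual information lower bound, here is a minimal sketch on a toy linear-Gaussian model (y = d·θ + noise, which is not a model from the abstract, just an assumed example with a known MI of 0.5·log(1 + d²)). For tractability of the sketch, the closed-form optimal critic stands in for the trained neural network, and a grid search over candidate designs stands in for the paper's stochastic gradient ascent; the NWJ bound is one of the prominent lower bounds the framework can use.

```python
import numpy as np

rng = np.random.default_rng(0)

def nwj_bound(d, n=200_000):
    """Monte Carlo estimate of the NWJ mutual-information lower bound
    for the toy model y = d*theta + eps, theta ~ N(0,1), eps ~ N(0,1).

    We plug in the optimal critic f(theta, y) = 1 + log p(y|theta)/p(y),
    which is available in closed form for this Gaussian toy model; in the
    implicit-model setting of the paper this critic would instead be a
    neural network trained on simulated data."""
    theta = rng.standard_normal(n)
    y = d * theta + rng.standard_normal(n)   # samples from the joint p(theta, y)
    y_marg = rng.permutation(y)              # shuffling breaks the pairing,
                                             # giving samples from p(theta)p(y)

    def log_ratio(th, yy):
        # log p(y|theta) - log p(y); both densities are Gaussian here:
        # p(y|theta) = N(d*theta, 1) and p(y) = N(0, 1 + d^2)
        return (0.5 * np.log(1 + d**2)
                - 0.5 * (yy - d * th)**2
                + 0.5 * yy**2 / (1 + d**2))

    f_joint = 1.0 + log_ratio(theta, y)
    f_marg = 1.0 + log_ratio(theta, y_marg)
    # NWJ bound: E_joint[f] - e^{-1} * E_marginals[exp(f)]
    return f_joint.mean() - np.exp(-1) * np.exp(f_marg).mean()

# Larger |d| yields a larger MI here, so the bound should pick it out;
# this grid search is a stand-in for gradient ascent over designs.
designs = [0.5, 1.0, 2.0]
bounds = {d: nwj_bound(d) for d in designs}
best_design = max(bounds, key=bounds.get)
```

With the optimal critic the bound is tight, so the Monte Carlo estimate for each design sits close to the true value 0.5·log(1 + d²), and the search selects the largest design. In the actual framework the critic parameters and the design are updated jointly by stochastic gradient ascent on the same objective.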