Integrating physics models within machine learning models holds considerable promise for learning robust models with improved interpretability and the ability to extrapolate. In this work, we focus on the integration of incomplete physics models into deep generative models. In particular, we introduce a variational autoencoder (VAE) architecture in which part of the latent space is grounded by physics. A key technical challenge is to strike a balance between the incomplete physics model and trainable components, such as neural networks, to ensure that the physics part is used in a meaningful manner. To this end, we propose a regularized learning method that controls the effect of the trainable components and preserves the semantics of the physics-based latent variables as intended. We not only demonstrate generative performance improvements on a set of synthetic and real-world datasets, but also show that the learned models are robust and can consistently extrapolate beyond the training distribution in a meaningful manner. Moreover, we show that the generative process can be controlled in an interpretable manner.
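To make the architecture described above concrete, the following is a minimal sketch, not the authors' code, of a VAE whose latent code is split into a physics-grounded part and an auxiliary part: a fixed, incomplete physics model decodes the physics part, a trainable network corrects its output, and a regularizer discourages the correction from overriding the physics. All names here (`PhysicsVAE`, `physics_decoder`, `alpha`) and the specific regularizer are hypothetical assumptions for illustration.

```python
import torch
import torch.nn as nn

class PhysicsVAE(nn.Module):
    """Sketch of a VAE with a physics-grounded latent subspace (hypothetical)."""

    def __init__(self, x_dim, z_phys_dim, z_aux_dim, physics_decoder, hidden=128):
        super().__init__()
        z_dim = z_phys_dim + z_aux_dim
        self.z_phys_dim = z_phys_dim
        # Encoder outputs mean and log-variance for the full latent code.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_dim))
        # Fixed (non-trainable) incomplete physics model: z_phys -> x.
        self.physics_decoder = physics_decoder
        # Trainable component that augments the physics-based reconstruction.
        self.nn_decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_phys = z[:, :self.z_phys_dim]
        x_phys = self.physics_decoder(z_phys)   # physics-only reconstruction
        x_hat = x_phys + self.nn_decoder(z)     # trainable correction on top
        return x_hat, x_phys, mu, logvar

def loss(x, x_hat, x_phys, mu, logvar, alpha=1.0):
    # Standard ELBO terms plus a regularizer that keeps the physics part
    # meaningful by penalizing what the physics alone fails to explain.
    # This is one simple choice; the paper's exact regularizer may differ.
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)).mean()
    phys_reg = ((x - x_phys) ** 2).sum(dim=-1).mean()
    return recon + kl + alpha * phys_reg
```

The weight `alpha` controls how strongly the physics-only reconstruction must match the data, which is one way to prevent the trainable decoder from absorbing everything and stripping the physics-based latent variables of their intended semantics.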