We explore training deep neural network models in conjunction with physical simulations based on partial differential equations (PDEs), using the simulated degrees of freedom as a latent space for the neural network. In contrast to previous work, we do not impose constraints on the simulated space, but rather treat its degrees of freedom purely as tools to be used by the neural network. We demonstrate this concept for learning reduced representations: it is typically very challenging for conventional simulations to faithfully preserve correct solutions over long time spans when working with traditional reduced representations, and this problem is particularly pronounced for solutions with a large amount of small-scale features. Here, data-driven methods can learn to restore the details required for accurate solutions of the underlying PDE problem. We explore the use of a physical, reduced latent space within this context, and train models such that they can modify the content of the physical states as much as needed to best satisfy the learning objective. Surprisingly, this autonomy allows the neural network to discover alternate dynamics that enable significantly improved performance on the given tasks. We demonstrate this concept for a range of challenging test cases, among others for Navier-Stokes-based turbulence simulations.
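The core pipeline described above can be sketched as an encode → coarse-solve → decode rollout. The following is a minimal, illustrative sketch only: the averaging encoder, the explicit diffusion step standing in for the PDE solver, and the nearest-neighbor decoder are all assumptions for demonstration, not the paper's actual trained components (in the paper, the encoder and decoder are learned networks that are free to alter the reduced physical states).

```python
import numpy as np

# Hypothetical sketch of the "physical latent space" idea: a fine-grid PDE
# state is encoded into a reduced simulated state, a coarse solver advances
# it, and a decoder restores the fine grid. All operators here are simple
# stand-ins; in the described approach the encoder/decoder are neural
# networks trained end-to-end through a differentiable solver.

def encode(fine_state, factor=4):
    """Downsample the fine state to the reduced (latent) grid by averaging."""
    n = fine_state.shape[0] // factor
    return fine_state[: n * factor].reshape(n, factor).mean(axis=1)

def coarse_pde_step(u, nu=0.1):
    """One explicit diffusion step on the reduced grid (stand-in PDE solver)."""
    return u + nu * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def decode(reduced_state, factor=4):
    """Upsample back to the fine grid; a trained decoder would restore
    the small-scale detail lost in the reduction."""
    return np.repeat(reduced_state, factor)

# End-to-end rollout: a trained network could modify the reduced state
# before each solver step to best satisfy the learning objective.
fine0 = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
latent = encode(fine0)          # 64 fine cells -> 16 latent cells
for _ in range(10):
    latent = coarse_pde_step(latent)
prediction = decode(latent)     # back to 64 fine cells
```

Because the latent space is itself a simulated physical state, the solver provides the temporal dynamics for free; the learning objective only needs to shape how states enter and leave that space.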