We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that any solution of the continuity equation can be represented as a divergence-free vector field. We hence propose building divergence-free neural networks through the concept of differential forms and, with the aid of automatic differentiation, realize two practical constructions. As a result, we can parameterize pairs of densities and vector fields that always exactly satisfy the continuity equation, forgoing the need for extra penalty methods or expensive numerical simulation. Furthermore, we prove these models are universal and can thus represent any divergence-free vector field. Finally, we experimentally validate our approaches by computing neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
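To make the construction concrete, here is a minimal sketch (not the paper's exact architecture) of one classical way to obtain an exactly divergence-free field with automatic differentiation: take the row-wise divergence of an antisymmetric matrix-valued field \(A(x)\), so that \(v_i = \sum_j \partial_j A_{ij}\) and \(\nabla\cdot v = \sum_{i,j}\partial_i\partial_j A_{ij} = 0\) by antisymmetry. The toy field `A` below stands in for a neural network and is purely illustrative.

```python
import jax
import jax.numpy as jnp

def A(x):
    # Toy smooth matrix-valued field standing in for a neural network;
    # antisymmetrizing guarantees A(x) = -A(x)^T at every point.
    M = jnp.outer(jnp.sin(x), jnp.cos(x)) + jnp.diag(x ** 2)
    return M - M.T

def divergence_free_field(x):
    # v_i(x) = sum_j dA_ij/dx_j; divergence-free by antisymmetry of A.
    J = jax.jacfwd(A)(x)          # J[i, j, k] = dA_ij / dx_k
    return jnp.einsum('ijj->i', J)

def divergence(f, x):
    # div f = trace of the Jacobian of f at x.
    return jnp.trace(jax.jacfwd(f)(x))

x = jnp.array([0.3, -1.2, 0.7])
v = divergence_free_field(x)
div_v = divergence(divergence_free_field, x)  # ≈ 0 up to float error
```

Because the divergence vanishes identically by construction, no penalty term or numerical solver is needed to enforce the constraint, which is the property the abstract highlights.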