We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that solutions of the continuity equation can be represented as divergence-free vector fields. We hence propose building divergence-free neural networks through the concept of differential forms, and with the aid of automatic differentiation, realize two practical constructions. As a result, we can parameterize pairs of densities and vector fields that always satisfy the continuity equation by construction, foregoing the need for extra penalty methods or expensive numerical simulation. Furthermore, we prove these models are universal and so can be used to represent any divergence-free vector field. Finally, we experimentally validate our approaches on neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
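To make the construction concrete, here is a minimal sketch (not the paper's actual implementation) of the simplest such parameterization: in two dimensions, any field of the form v = (∂ψ/∂y, −∂ψ/∂x) for a scalar potential ψ is divergence-free, since ∂v₁/∂x + ∂v₂/∂y = ψ_yx − ψ_xy = 0 by equality of mixed partials. A neural network can play the role of ψ, with automatic differentiation producing v exactly; the network architecture below is a hypothetical stand-in.

```python
import jax
import jax.numpy as jnp

def psi(params, x):
    # Tiny MLP standing in for the learned scalar potential (hypothetical).
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze()

def velocity(params, x):
    # Rotate the gradient of psi by 90 degrees: v = (psi_y, -psi_x).
    g = jax.grad(psi, argnums=1)(params, x)
    return jnp.array([g[1], -g[0]])

def divergence(params, x):
    # div v = trace of the Jacobian of v with respect to x.
    J = jax.jacobian(velocity, argnums=1)(params, x)
    return jnp.trace(J)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (2, 16)), jnp.zeros(16),
          jax.random.normal(k2, (16, 1)), jnp.zeros(1))
x = jax.random.normal(k3, (2,))
print(float(divergence(params, x)))  # numerically zero for any params and x
```

The divergence vanishes identically, for any network weights, so no penalty term is needed; the paper's differential-forms constructions generalize this idea to higher dimensions.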