Numerically solving partial differential equations (PDEs) often entails spatial and temporal discretization. Traditional methods (e.g., finite difference, finite element, smoothed-particle hydrodynamics) frequently adopt explicit spatial discretizations, such as grids, meshes, and point clouds, where each degree of freedom corresponds to a location in space. While explicit spatial correspondences are intuitive to model and understand, such representations are not necessarily optimal for accuracy, memory usage, or adaptivity. In this work, we explore implicit neural representations as an alternative spatial discretization, where spatial information is stored implicitly in the neural network weights. With implicit neural spatial representations, PDE-constrained time-stepping translates into updating the network weights, which naturally integrates with commonly adopted optimization-based time integrators. We validate our approach on a variety of classic PDEs, with examples involving large elastic deformations, turbulent fluids, and multiscale phenomena. While slower to compute than traditional representations, our approach exhibits higher accuracy, lower memory consumption, and dynamically adaptive allocation of degrees of freedom without complex remeshing.
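To make the core idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, a hypothetical Field MLP standing in for the implicit neural spatial representation, and an implicit-Euler step applied to 1D linear advection (u_t + c u_x = 0) as a stand-in PDE. All names, the network architecture, and the hyperparameters are illustrative. It shows how "time-stepping translates into updating neural network weights": the new state's weights are optimized so the time-discretized PDE residual vanishes at sampled collocation points.

```python
# A minimal sketch (assumptions: PyTorch, 1D linear advection, implicit Euler).
# The spatial field u(x) is stored implicitly as MLP weights; one time step is
# an optimization over the new network's weights.
import copy
import torch

class Field(torch.nn.Module):
    """Implicit neural spatial representation: x -> u(x)."""
    def __init__(self, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))

    def forward(self, x):
        return self.net(x)

def time_step(u_old, c=1.0, dt=0.01, n_samples=256, n_iters=200):
    """Advance the field one step by minimizing the implicit-Euler residual
    (u_new - u_old)/dt + c * du_new/dx = 0 at random collocation points."""
    u_new = copy.deepcopy(u_old)  # warm-start from the previous step's weights
    opt = torch.optim.Adam(u_new.parameters(), lr=1e-3)
    for _ in range(n_iters):
        x = torch.rand(n_samples, 1, requires_grad=True)  # collocation points
        u = u_new(x)
        # Spatial derivative du/dx via autodiff through the network input.
        du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        with torch.no_grad():
            u_prev = u_old(x)  # previous state, treated as a constant
        residual = (u - u_prev) / dt + c * du_dx
        loss = (residual ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return u_new
```

In this sketch, simulation proceeds by first fitting a Field to the initial condition and then calling time_step repeatedly; because the state lives entirely in the network weights, memory cost is fixed by the architecture rather than by a mesh resolution, echoing the adaptivity and memory claims above.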