Solving partial differential equations (PDEs) on shapes underpins many shape analysis and engineering tasks, yet prevailing PDE solvers operate on polygonal/triangle meshes while modern 3D assets increasingly live as neural representations. This mismatch leaves no suitable method for solving surface PDEs directly within the neural domain, forcing explicit mesh extraction or per-instance residual training and preventing end-to-end workflows. We present a novel, mesh-free formulation that learns a local update operator conditioned on local neural shape attributes, enabling surface PDEs to be solved directly where the (neural) data lives. The operator integrates naturally with prevalent neural surface representations, is trained once on a single representative shape, and generalizes across shape and topology variations, enabling accurate, fast inference without explicit meshing or per-instance optimization while preserving differentiability. Across analytic benchmarks (the heat equation and a Poisson solve on the sphere) and real neural assets spanning different representations, our method slightly outperforms CPM while remaining reasonably close to FEM and, to our knowledge, delivers the first end-to-end pipeline that solves surface PDEs on both neural and classical surface representations. Code will be released upon acceptance.