The long runtime of high-fidelity partial differential equation (PDE) solvers makes them unsuitable for time-critical applications. We propose to accelerate PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches reduce the dimensionality of discretized vector fields, our continuous reduced-order modeling (CROM) approach builds a low-dimensional embedding of the continuous vector fields themselves, not their discretization. We represent this reduced manifold using continuously differentiable neural fields, which can be trained on any and all available numerical solutions of the continuous system, even when they are obtained using diverse methods or discretizations. We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79$\times$ and 49$\times$ better accuracy, and 39$\times$ and 132$\times$ smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109$\times$ and 89$\times$ wall-clock speedups over unreduced models on CPUs and GPUs, respectively. Videos and code are available on the project page: https://crom-pde.github.io
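To make the core idea concrete, below is a minimal sketch (not the authors' implementation; architecture and training are assumptions for illustration) of a CROM-style neural-field decoder: a small MLP $g(x, z)$ that maps a spatial coordinate $x$ and a low-dimensional latent code $z$ to the continuous field value $u(x)$. Because the field is queried at arbitrary coordinates rather than on a fixed grid, the same decoder can serve voxel, mesh, or point-cloud data.

```python
import numpy as np

# Minimal sketch of a CROM-style neural-field decoder g(x, z) -> u(x).
# Weights are random here for illustration; in the paper they are trained
# on numerical PDE solutions from arbitrary discretizations.

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights/biases for a fully connected network with given layer sizes."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def decode(params, x, z):
    """Evaluate the field at query points x (N, d_x) given one latent code z (d_z,)."""
    # Concatenate each spatial coordinate with the (shared) latent code.
    h = np.concatenate([x, np.broadcast_to(z, (x.shape[0], z.shape[0]))], axis=1)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.tanh(h)  # smooth activation keeps g continuously differentiable
    return h

d_x, d_z, d_u = 2, 8, 1                 # spatial dim, latent dim, field dim
params = init_mlp([d_x + d_z, 64, 64, d_u])

x = rng.uniform(size=(100, d_x))        # query ANY points -- no fixed grid required
z = rng.standard_normal(d_z)            # one low-dimensional code encodes the whole field
u = decode(params, x, z)
print(u.shape)                          # (100, 1)
```

Note how the latent dimension `d_z` (8 here) is what the accuracy and memory comparisons against POD and autoencoders are measured at: all methods compress the state to the same number of latent coordinates, but CROM's decoder is defined over continuous space.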