Dissipative partial differential equations that exhibit chaotic dynamics tend to evolve to attractors that exist on finite-dimensional manifolds. We present a data-driven reduced-order modeling method that capitalizes on this fact by finding the coordinates of this manifold and finding an ordinary differential equation (ODE) describing the dynamics in this coordinate system. The manifold coordinates are discovered using an undercomplete autoencoder -- a neural network (NN) that reduces and then expands dimension. Then the ODE, in these coordinates, is approximated by an NN using the neural ODE framework. Both of these methods only require snapshots of data to learn a model, and the data can be widely and/or unevenly spaced. We apply this framework to the Kuramoto-Sivashinsky equation for different domain sizes that exhibit chaotic dynamics. With this system, we find that dimension reduction improves performance relative to predictions in the ambient space, where artifacts arise. Then, with the low-dimensional model, we vary the training data spacing and find excellent short-time predictions and long-time statistical reconstruction of the true dynamics for widely spaced data (spacing of ~0.7 Lyapunov times). We end by comparing performance with various degrees of dimension reduction, and find a "sweet spot" in terms of performance vs. dimension.
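The two-step pipeline described above (reduce to manifold coordinates, then learn an ODE in those coordinates) can be illustrated with a minimal numpy sketch. This is not the paper's method: the nonlinear autoencoder is replaced by its linear special case (PCA), the neural ODE is replaced by a linear vector field fit via least squares, and the data are synthetic snapshots lying near a low-dimensional subspace; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": ambient dimension 8, but the trajectory lies near a
# 2-D subspace (a stand-in for the finite-dimensional attractor of a
# dissipative PDE). The latent dynamics are a simple rotation.
t = np.linspace(0.0, 20.0, 2000)
z_true = np.stack([np.cos(t), np.sin(t)], axis=1)         # latent trajectory (2000, 2)
W = rng.normal(size=(2, 8))                               # embedding into ambient space
X = z_true @ W + 0.01 * rng.normal(size=(2000, 8))        # snapshots (2000, 8)

# Step 1: find manifold coordinates. The paper uses an undercomplete
# autoencoder NN; PCA is the linear special case of that reduce-then-expand map.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda x: (x - mu) @ Vt[:2].T                    # reduce dimension 8 -> 2
decode = lambda z: z @ Vt[:2] + mu                        # expand dimension 2 -> 8
Z = encode(X)                                             # manifold coordinates

# Step 2: learn dz/dt in the manifold coordinates. The paper trains a neural
# ODE; here a linear vector field fit by least squares stands in for the NN.
dt = t[1] - t[0]
dZ = (Z[1:] - Z[:-1]) / dt                                # finite-difference velocities
A, *_ = np.linalg.lstsq(Z[:-1], dZ, rcond=None)           # dz/dt ≈ z @ A

# Roll out the reduced-order model with forward Euler, then lift back to the
# ambient space through the decoder.
z = Z[0].copy()
for _ in range(len(t) - 1):
    z = z + dt * (z @ A)
x_pred = decode(z)
err = np.linalg.norm(x_pred - X[-1]) / np.linalg.norm(X[-1])
print(f"relative error at final snapshot: {err:.3f}")
```

Because the latent dynamics here happen to be linear, the least-squares fit recovers them well; the paper's NN-based encoder and vector field play the same roles for genuinely nonlinear, chaotic dynamics.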