A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens' theorem provides conditions for when it is possible to augment these partial measurements with time-delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. However, the coordinate transformation back to the original attractor is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a custom deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space where it is possible to represent the dynamics in a sparse, closed form. We demonstrate this approach on the Lorenz, R\"ossler, and Lotka-Volterra systems, learning dynamics from a single measurement variable. As a challenging example, we learn a Lorenz analogue from a single scalar variable extracted from a video of a chaotic waterwheel experiment. The resulting modeling framework combines deep learning to uncover effective coordinates and the sparse identification of nonlinear dynamics (SINDy) for interpretable modeling. Thus, we show that it is possible to simultaneously learn a closed-form model and the associated coordinate system for partially observed dynamics.
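As a rough illustration of the delay-embedding step the abstract refers to, the sketch below builds delay coordinates from a single measured Lorenz variable. The function name `delay_embed` and the parameter choices (`n_delays=7`, `tau=10`) are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Simulate the full system, but keep only a single measured variable, x(t).
t = np.linspace(0, 50, 10000)
sol = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0], t_eval=t)
x = sol.y[0]

def delay_embed(x, n_delays, tau):
    """Stack time-shifted copies of a scalar signal into delay coordinates.

    Row j of the returned matrix is
    [x[j], x[j + tau], ..., x[j + (n_delays - 1) * tau]].
    """
    n = len(x) - (n_delays - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(n_delays)])

# By Takens' theorem, for generic measurements and enough delays this
# embedding is diffeomorphic to the original full-state attractor.
H = delay_embed(x, n_delays=7, tau=10)
print(H.shape)  # (number of snapshots, 7)
```

In the full framework described above, an autoencoder then maps these delay coordinates into a latent space in which a SINDy regression yields a sparse, closed-form model of the dynamics.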