Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been highly successful in NLP and computer vision, where they also provide interpretable representations of data. However, transformers are limited in modeling continuous dynamical systems because they are fundamentally discrete-time and discrete-space models, and thus offer no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture designed for modeling continuous systems. This framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers and other spatiotemporal dynamics modeling methods, achieving superior performance on a range of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data.
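The abstract does not detail how the Sobolev-space optimization is implemented, but the general idea behind Sobolev-norm training can be sketched as a loss that penalizes not only the data misfit but also the magnitude of the output's derivatives, so that jagged predictions are penalized even when they fit the data well. The sketch below is a minimal, hypothetical illustration using a finite-difference estimate of the first derivative (an H^1-style penalty); the function name, weighting `lam`, and discretization are assumptions, not the paper's actual method.

```python
import numpy as np

def sobolev_h1_loss(pred, target, dt, lam=0.1):
    """Illustrative H^1-style objective: data-fit MSE plus a penalty on
    the (finite-difference) time derivative of the prediction.
    `lam` weights the smoothness term; all names here are hypothetical."""
    mse = np.mean((pred - target) ** 2)
    dpred = np.diff(pred) / dt          # first-derivative estimate
    smoothness = np.mean(dpred ** 2)    # discrete H^1 seminorm (squared)
    return mse + lam * smoothness

# Two predictions with comparable data fit, one smooth and one jagged.
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
target = np.sin(2 * np.pi * t)

smooth_pred = np.sin(2 * np.pi * t)                          # smooth fit
jagged_pred = smooth_pred + 0.05 * np.sign(np.sin(50 * t))   # small but jagged error

# The jagged prediction incurs a much larger derivative penalty,
# so the Sobolev-style loss prefers the smooth one.
loss_smooth = sobolev_h1_loss(smooth_pred, target, dt)
loss_jagged = sobolev_h1_loss(jagged_pred, target, dt)
```

Under such an objective, gradient descent is pushed toward outputs whose derivatives stay bounded, which is one way to encourage the continuous, smooth trajectories the abstract describes.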