We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observation of a single dynamical trajectory of the system, and can accurately predict its emergent behavior under conditions not observed during training. We consider a lattice model of active matter undergoing continuous-time Monte Carlo dynamics, simulated at a density at which its steady state comprises small, dispersed clusters. We train a neural network called a transformer on a single trajectory of the model. The transformer, which we show has the capacity to represent dynamical rules that are numerous and nonlocal, learns that the dynamics of this model consists of a small number of processes. Forward-propagated trajectories of the trained transformer, at densities not encountered during training, exhibit motility-induced phase separation and so predict the existence of a nonequilibrium phase transition. Transformers have the flexibility to learn dynamical rules from observation without explicit enumeration of rates or coarse-graining of configuration space, and so the procedure used here can be applied to a wide range of physical systems, including those with large and complex dynamical generators.
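To make the "continuous-time Monte Carlo dynamics" concrete, here is a minimal sketch of such a simulation, a Gillespie-style trajectory of a toy one-dimensional active lattice gas. The lattice size, particle number, and rates (`v`, `tumble`) are illustrative assumptions, not the parameters of the paper's model, which is two-dimensional; the point is only the structure of the trajectory data (a sequence of timestamped configurations) on which a transformer could be trained.

```python
import math
import random

def gillespie_trajectory(L=20, N=6, v=1.0, tumble=0.1, t_max=5.0, seed=0):
    """Continuous-time Monte Carlo (Gillespie) trajectory of a toy 1-D
    active lattice gas: each particle hops one site in the direction of
    its orientation at rate v (if the target site is empty) and flips
    its orientation at rate `tumble`. Returns [(time, configuration)]."""
    rng = random.Random(seed)
    pos = rng.sample(range(L), N)                 # distinct occupied sites
    ori = [rng.choice((-1, 1)) for _ in range(N)] # +1 or -1 orientation
    occ = set(pos)
    t, traj = 0.0, [(0.0, tuple(pos))]
    while t < t_max:
        # Enumerate every allowed event together with its rate.
        events = []
        for i in range(N):
            target = (pos[i] + ori[i]) % L
            if target not in occ:
                events.append((v, i, "hop"))
            events.append((tumble, i, "flip"))
        total = sum(rate for rate, _, _ in events)
        # Exponentially distributed waiting time to the next event.
        t += -math.log(rng.random()) / total
        # Choose one event with probability proportional to its rate.
        x, acc = rng.random() * total, 0.0
        for rate, i, kind in events:
            acc += rate
            if x < acc:
                break
        if kind == "hop":
            occ.remove(pos[i])
            pos[i] = (pos[i] + ori[i]) % L
            occ.add(pos[i])
        else:
            ori[i] = -ori[i]
        traj.append((t, tuple(pos)))
    return traj

traj = gillespie_trajectory()
```

A single such trajectory, serialized configuration by configuration, is the kind of token sequence on which a next-step-prediction model can be trained; the learned conditional distribution over moves then plays the role of the dynamical generator.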