Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how neural activity relates to perception and behavior. Models of neural dynamics often focus either on low-dimensional projections of neural activity or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representing flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components. The decomposed nature of the dynamics generalizes previous switched approaches and enables modeling of overlapping and non-stationary drifts in the dynamics. We further present a dictionary-learning-driven approach to model fitting, leveraging recent results in tracking sparse vectors over time. We demonstrate that our model can learn efficient representations and smooth transitions between dynamical modes in both continuous-time and discrete-time examples. We show results on low-dimensional linear and nonlinear attractors to demonstrate that our decomposed dynamical systems model can accurately approximate nonlinear dynamics. Additionally, we apply our model to C. elegans data, illustrating a diversity of dynamics that is obscured when classified into discrete states.
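As a minimal illustrative sketch (not the authors' fitting procedure), the decomposition described above can be instantiated in a discrete-time form where the latent state evolves as x_{t+1} = (sum_j c_{j,t} f_j) x_t, with a dictionary of simple linear operators f_j and sparse, slowly drifting coefficients c_{j,t}. The dictionary construction, drift model, and sparsification rule below are assumptions chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a dictionary of J simple d x d dynamics operators f_j
# and sparse time-varying coefficients c_t that mix them at each step.
d, J, T = 3, 4, 200
dictionary = [np.eye(d) + 0.05 * rng.standard_normal((d, d)) for _ in range(J)]

def step(x, coeffs, dictionary):
    """Advance the latent state one step: x_{t+1} = (sum_j c_j f_j) x_t."""
    A_t = sum(c * F for c, F in zip(coeffs, dictionary))
    return A_t @ x

# Simulate a trajectory whose active components drift slowly and non-stationarily,
# keeping only a few nonzero coefficients per step (the sparse combination).
x = rng.standard_normal(d)
trajectory = [x]
coeffs = np.zeros(J)
coeffs[:2] = [0.7, 0.3]                      # start with two active components
for t in range(T - 1):
    coeffs = coeffs + 0.01 * rng.standard_normal(J)   # slow drift in the mixture
    coeffs[np.argsort(np.abs(coeffs))[:-2]] = 0.0     # keep only the 2 largest (sparsity)
    x = step(x, coeffs, dictionary)
    trajectory.append(x)

trajectory = np.stack(trajectory)
print(trajectory.shape)  # (200, 3)
```

Because the coefficients change gradually rather than switching between a fixed set of discrete regimes, a sketch like this can produce overlapping and smoothly transitioning dynamical modes, which is the behavior the decomposed model is designed to capture.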