Modeling long-range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities such as text, vision, audio and video. While attention-based models are a popular and effective choice for modeling short-range interactions, their performance on tasks requiring long-range reasoning has been largely inadequate. In an exciting result, Gu et al. (ICLR 2022) proposed the $\textit{Structured State Space}$ (S4) architecture, delivering large gains over state-of-the-art models on several long-range tasks across various modalities. The core proposition of S4 is the parameterization of state matrices via a diagonal plus low-rank structure, which allows efficient computation. In this work, we show that one can match the performance of S4 even without the low-rank correction, i.e., by assuming the state matrices to be diagonal. Our $\textit{Diagonal State Space}$ (DSS) model matches the performance of S4 on the Long Range Arena tasks and on speech classification on the Speech Commands dataset, while being conceptually simpler and straightforward to implement.
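To make the diagonal parameterization concrete, the following is a minimal NumPy sketch, not the authors' exact DSS implementation or parameterization: once the state matrix is assumed diagonal with eigenvalues $\lambda_1, \dots, \lambda_N$, the discrete SSM convolution kernel reduces to a weighted sum of geometric sequences, which can then be applied to an input sequence by FFT-based convolution. The function names (`dss_kernel`, `apply_ssm`) and the random initialization are illustrative assumptions.

```python
import numpy as np

def dss_kernel(Lambda, w, step, L):
    """Illustrative sketch (not the paper's exact parameterization):
    with a diagonal state matrix diag(lambda_1..lambda_N), the discrete SSM
    kernel is a weighted sum of geometric sequences,
        K[k] = sum_i w_i * exp(lambda_i * step)^k.
    Lambda : (N,) complex eigenvalues of the (assumed-diagonal) state matrix
    w      : (N,) complex mixing weights (absorbing the B and C matrices)
    step   : discretization step size
    L      : kernel length
    """
    k = np.arange(L)
    # (N, L) Vandermonde-like matrix of powers exp(lambda_i * step * k)
    P = np.exp(Lambda[:, None] * step * k[None, :])
    return np.real(w @ P)  # (L,) real-valued convolution kernel

def apply_ssm(u, K):
    """Apply the SSM to input u as a causal convolution with kernel K via FFT."""
    L = len(u)
    n = 2 * L  # zero-pad so the circular convolution equals the linear one
    y = np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(K, n), n)[:L]
    return y

# Hypothetical usage on a random input sequence
rng = np.random.default_rng(0)
N, L = 16, 1024
Lambda = -0.5 + 1j * rng.normal(size=N)   # negative real parts keep the kernel stable
w = rng.normal(size=N) + 1j * rng.normal(size=N)
K = dss_kernel(Lambda, w, step=1e-2, L=L)
y = apply_ssm(rng.normal(size=L), K)
```

In this sketch the per-state recurrences decouple, so the kernel costs $O(NL)$ elementwise operations and the sequence is processed with a single FFT convolution, without any low-rank correction term.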