A proper parametrization of the state transition matrices of linear state-space models (SSMs), followed by standard nonlinearities, enables them to efficiently learn representations from sequential data, establishing the state of the art on a wide range of long-range sequence modeling benchmarks. In this paper, we show that we can improve further when a structural SSM such as S4 is given by a linear liquid time-constant (LTC) state-space model. LTC neural networks are causal continuous-time neural networks with an input-dependent state transition module, which lets them adapt to incoming inputs at inference time. We show that by using the diagonal plus low-rank decomposition of the state transition matrix introduced in S4, together with a few simplifications, the LTC-based structural state-space model, dubbed Liquid-S4, achieves new state-of-the-art generalization across sequence modeling tasks with long-term dependencies such as image, text, audio, and medical time-series, with an average performance of 87.32% on the Long-Range Arena benchmark. On the full raw Speech Command recognition dataset, Liquid-S4 achieves 96.78% accuracy with a 30% reduction in parameter count compared to S4. This additional performance gain is a direct result of Liquid-S4's kernel structure, which takes into account the similarities between input sequence samples during training and inference.
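For reference, the following is a minimal sketch of a linear LTC state-space model of the kind the abstract describes; the symbols \(A\), \(B\), \(C\), input \(u(t)\), state \(x(t)\), and output \(y(t)\) are standard SSM notation assumed here rather than taken from this abstract:

\[
\frac{dx(t)}{dt} = \bigl[A + B\,u(t)\bigr]\,x(t) + B\,u(t), \qquad y(t) = C\,x(t).
\]

Compared with a standard linear SSM, \(\dot{x}(t) = A\,x(t) + B\,u(t)\), the extra bilinear term \(B\,u(t)\,x(t)\) makes the state transition input-dependent; unrolling the discretized recurrence then yields convolution-kernel terms involving products of input samples, which is one way to read the abstract's claim that the kernel structure accounts for similarities between input sequence samples.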