Structured state space sequence (S4) models have recently achieved state-of-the-art performance on long-range sequence modeling tasks. These models also have fast inference speeds and parallelisable training, making them potentially useful in many reinforcement learning settings. We propose a modification to a variant of S4 that enables us to initialise and reset the hidden state in parallel, allowing us to tackle reinforcement learning tasks. We show that our modified architecture runs asymptotically faster than Transformers and performs better than LSTM models on a simple memory-based task. Then, by leveraging the model's ability to handle long-range sequences, we achieve strong performance on a challenging meta-learning task in which the agent is given a randomly-sampled continuous control environment, combined with a randomly-sampled linear projection of the environment's observations and actions. Furthermore, we show that the resulting model can adapt to out-of-distribution held-out tasks. Overall, the results presented in this paper suggest that S4 models are a strong contender for the default architecture used for in-context reinforcement learning.
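The parallel state reset mentioned above can be illustrated with a small sketch. The idea is that a linear recurrence h_t = a_t * h_{t-1} + b_t can be expressed with an associative combine operator, and an episode boundary becomes a reset simply by zeroing the decay coefficient at that step. This is a minimal illustrative sketch, not the paper's implementation; the function names (`combine`, `resettable_scan`) and the scalar, single-channel setting are assumptions for clarity.

```python
import numpy as np

def combine(e1, e2):
    # Compose two linear-recurrence elements (a, b), each representing
    # the affine map h -> a*h + b. Composition is associative.
    a1, b1 = e1
    a2, b2 = e2
    return (a2 * a1, a2 * b1 + b2)

def resettable_scan(a, bx, done):
    # a: per-step decay coefficients; bx: per-step inputs (e.g. B*x_t);
    # done: episode-boundary flags.
    # A reset is expressed by zeroing the decay, so the step becomes
    # h -> 0*h + bx_t, i.e. the hidden state re-initialises from the input.
    a = np.where(done, 0.0, a)
    # Sequential fold shown for clarity; because `combine` is associative,
    # the same result can be computed with a parallel associative scan
    # (e.g. jax.lax.associative_scan) in O(log T) depth.
    out, acc = [], (1.0, 0.0)
    for elem in zip(a, bx):
        acc = combine(acc, elem)
        out.append(acc[1])
    return np.array(out)
```

For example, with decay 0.5, inputs 1..5, and a reset at step 3, the state after the reset depends only on inputs from the new episode, while earlier steps accumulate normally.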