The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory augmentation, i.e., explicitly storing past information in an external memory for use in subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from insights in the human memory literature. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration evaluating the use of surprisal as a linking hypothesis, and further identify the limitations of this approach to inform future research.
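To make the surprisal-as-linking-hypothesis idea concrete, the sketch below shows one way a surprisal signal could gate writes to an external memory: tokens whose next-token surprisal exceeds a threshold have their representations stored for later retrieval. This is a minimal illustration, not the paper's implementation; the names `ExternalMemory`, `surprisal_gated_write`, and the 8-bit threshold are illustrative assumptions, and any language model's logits could stand in for the random arrays used here.

```python
# Minimal sketch (not the authors' implementation) of surprisal-gated writes
# to an external memory. All names and the threshold are illustrative.
import numpy as np

def token_surprisal(logits: np.ndarray, target_ids: np.ndarray) -> np.ndarray:
    """Surprisal -log2 p(token | context) computed from next-token logits.

    logits: (seq_len, vocab_size) pre-softmax scores at each position.
    target_ids: (seq_len,) the token actually observed at each position.
    """
    # Numerically stable log-softmax over the vocabulary.
    log_probs = logits - logits.max(axis=-1, keepdims=True)
    log_probs = log_probs - np.log(np.exp(log_probs).sum(axis=-1, keepdims=True))
    nats = -log_probs[np.arange(len(target_ids)), target_ids]
    return nats / np.log(2.0)  # convert nats to bits

class ExternalMemory:
    """Toy key-value store standing in for a memory-augmented Transformer's cache."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key: np.ndarray, value: np.ndarray) -> None:
        self.keys.append(key)
        self.values.append(value)

def surprisal_gated_write(memory: ExternalMemory,
                          hidden_states: np.ndarray,
                          logits: np.ndarray,
                          target_ids: np.ndarray,
                          surprisal_threshold: float = 8.0) -> None:
    """Store only the representations of tokens whose surprisal exceeds the threshold."""
    s = token_surprisal(logits, target_ids)
    for i in np.flatnonzero(s > surprisal_threshold):
        memory.write(hidden_states[i], hidden_states[i])

# Usage with random stand-ins for a language model's outputs.
rng = np.random.default_rng(0)
seq_len, vocab, d_model = 16, 100, 32
mem = ExternalMemory()
surprisal_gated_write(mem,
                      hidden_states=rng.normal(size=(seq_len, d_model)),
                      logits=rng.normal(size=(seq_len, vocab)),
                      target_ids=rng.integers(0, vocab, size=seq_len))
print(f"stored {len(mem.keys)} of {seq_len} token representations")
```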