The Transformer has become the dominant neural architecture in NLP and CV, mostly in supervised settings. Recently, a similar surge of interest in Transformers has appeared in reinforcement learning (RL), where they face unique design choices and challenges arising from the nature of RL. However, the evolution of Transformers in RL has not yet been well unraveled. Hence, in this paper, we seek to systematically review the motivations and progress of using Transformers in RL, provide a taxonomy of existing works, discuss each sub-field, and summarize future prospects.