In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast with this trend, state-of-the-art reinforcement learning (RL) algorithms often use only small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that the small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper we investigate how RL agents are affected by replacing the small MLPs with larger, modern networks that use skip connections and normalization, focusing specifically on soft actor-critic (SAC) algorithms. We verify, empirically, that na\"ively adopting such architectures leads to instabilities and poor performance, which likely contributes to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that the intrinsic instability arising from the actor in SAC taking gradients through the critic is the culprit. We demonstrate that a simple smoothing method can mitigate this issue, enabling stable training with large modern architectures. After smoothing, larger models yield dramatic performance improvements for state-of-the-art agents -- suggesting that more "easy" gains may be had by focusing on model architectures in addition to algorithmic innovations.
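As a point of reference for why the actor's gradients pass through the critic, the standard SAC actor objective in its reparameterized form (standard notation, not taken from this paper) can be written as
\[
J_\pi(\phi) \;=\; \mathbb{E}_{s_t \sim \mathcal{D},\; \epsilon_t \sim \mathcal{N}}
\Big[ \alpha \log \pi_\phi\big(f_\phi(\epsilon_t; s_t) \mid s_t\big)
\;-\; Q_\theta\big(s_t, f_\phi(\epsilon_t; s_t)\big) \Big],
\]
where $f_\phi(\epsilon_t; s_t)$ is the reparameterized action. Minimizing $J_\pi$ backpropagates $\nabla_\phi$ through the critic $Q_\theta$, so a sharp or poorly conditioned critic landscape feeds directly into the actor update -- the instability channel the abstract identifies as the culprit.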