In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast to this trend, state-of-the-art reinforcement learning (RL) algorithms often use small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper we investigate how RL agents are affected by replacing the small MLPs with larger modern networks featuring skip connections and normalization, focusing specifically on actor-critic algorithms. We empirically verify that naively adopting such architectures leads to instabilities and poor performance, likely contributing to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that instability from taking gradients through the critic is the culprit. We demonstrate that spectral normalization (SN) can mitigate this issue and enable stable training with large modern architectures. After smoothing with SN, larger models yield significant performance improvements, suggesting that more "easy" gains may be had by focusing on model architectures in addition to algorithmic innovations.
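As a concrete illustration of the idea (a minimal sketch, not the paper's exact recipe), the code below applies spectral normalization to the linear layers of a residual, layer-normalized critic in PyTorch via the standard `torch.nn.utils.spectral_norm` wrapper. The hidden width, block count, and the choice of which layers to normalize are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class ResidualBlock(nn.Module):
    """A 'modern' block: Linear -> LayerNorm -> ReLU with a skip connection."""

    def __init__(self, dim: int):
        super().__init__()
        # Spectral normalization bounds the layer's largest singular value,
        # smoothing the critic so that gradients taken through it
        # (e.g., for the actor update) remain stable.
        self.linear = spectral_norm(nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.relu(self.norm(self.linear(x)))


class Critic(nn.Module):
    """Q(s, a) critic built from spectrally normalized residual blocks."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256, blocks: int = 4):
        super().__init__()
        self.inp = nn.Linear(obs_dim + act_dim, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.inp(torch.cat([obs, act], dim=-1)))
        return self.out(self.blocks(h))
```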