Ensembles and auxiliary tasks are both well known to improve the performance of machine learning models when data is limited. However, the interaction between these two methods is not well studied, particularly in the context of deep reinforcement learning. In this paper, we study the effects of ensembles and auxiliary tasks when combined with the deep Q-learning algorithm, and we perform a case study on ATARI games under a limited-data constraint. Moreover, we derive a refined bias-variance-covariance decomposition to analyze the different ways of learning ensembles and of using auxiliary tasks, and we use this analysis to help interpret the results of the case study. Our code is open source and available at https://github.com/NUS-LID/RENAULT.
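For context, the refined decomposition mentioned above builds on the classical bias-variance-covariance decomposition for an averaging ensemble. The sketch below states only that standard form, for M members f_1,...,f_M averaged into \bar{f}, with the expectation taken over training sets; it is not the paper's refined variant.

% Classical bias-variance-covariance decomposition (standard form, not the
% paper's refined version). \bar{f}(x) = (1/M) \sum_i f_i(x), target y.
\begin{align}
\mathbb{E}\big[(\bar{f}(x) - y)^2\big]
  &= \overline{\mathrm{bias}}^2
   + \frac{1}{M}\,\overline{\mathrm{var}}
   + \Big(1 - \frac{1}{M}\Big)\,\overline{\mathrm{covar}}, \\
\overline{\mathrm{bias}}
  &= \frac{1}{M}\sum_{i=1}^{M} \big(\mathbb{E}[f_i(x)] - y\big), \\
\overline{\mathrm{var}}
  &= \frac{1}{M}\sum_{i=1}^{M} \mathbb{E}\big[(f_i(x) - \mathbb{E}[f_i(x)])^2\big], \\
\overline{\mathrm{covar}}
  &= \frac{1}{M(M-1)}\sum_{i=1}^{M}\sum_{j\neq i}
     \mathbb{E}\big[(f_i(x) - \mathbb{E}[f_i(x)])\,(f_j(x) - \mathbb{E}[f_j(x)])\big].
\end{align}

As the ensemble grows, the variance term shrinks as 1/M while the covariance term does not, which is why analyses of ensemble methods focus on how different training schemes affect the covariance between members.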