Early stopping based on validation set performance is a popular approach to finding the right balance between under- and overfitting in the context of supervised learning. However, in reinforcement learning, even for supervised sub-problems such as world model learning, early stopping is not applicable because the dataset is continually evolving. As a solution, we propose a new general method that dynamically adjusts the update-to-data (UTD) ratio during training based on under- and overfitting detection on a small subset of the continuously collected experience that is not used for training. We apply our method to DreamerV2, a state-of-the-art model-based reinforcement learning algorithm, and evaluate it on the DeepMind Control Suite and the Atari $100$k benchmark. The results demonstrate that adjusting the UTD ratio with our approach balances under- and overfitting better than the default setting in DreamerV2, and that it is competitive with an extensive hyperparameter search, which is not feasible for many applications. Our method eliminates the need to set the UTD hyperparameter by hand and even leads to higher robustness with regard to other learning-related hyperparameters, further reducing the amount of necessary tuning.
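To make the core idea concrete, the following Python sketch adjusts a UTD ratio multiplicatively whenever the loss on a small held-out subset of collected experience worsens while the training loss improves (an overfitting signal), and increases it otherwise. The class and parameter names (DynamicUTD, scale, utd_min, utd_max) and the exact adjustment rule are illustrative assumptions, not the implementation used in the paper.

```python
class DynamicUTD:
    """Illustrative sketch: adjust the update-to-data (UTD) ratio from
    train/validation loss trends. Not the paper's exact procedure."""

    def __init__(self, initial_utd=1.0, scale=1.3, utd_min=0.05, utd_max=16.0):
        self.utd = initial_utd            # gradient updates per collected environment step
        self.scale = scale                # multiplicative adjustment factor (assumed value)
        self.utd_min, self.utd_max = utd_min, utd_max
        self.prev_train_loss = None
        self.prev_val_loss = None

    def update(self, train_loss, val_loss):
        """Call periodically with the current world-model loss on training data
        and on the held-out validation subset; returns the new UTD ratio."""
        if self.prev_val_loss is not None:
            val_worse = val_loss > self.prev_val_loss
            train_better = train_loss < self.prev_train_loss
            if val_worse and train_better:
                # Overfitting signal: validation degrades while training improves,
                # so perform fewer updates per collected sample.
                self.utd = max(self.utd / self.scale, self.utd_min)
            else:
                # Underfitting (or healthy) signal: allow more updates per sample.
                self.utd = min(self.utd * self.scale, self.utd_max)
        self.prev_train_loss, self.prev_val_loss = train_loss, val_loss
        return self.utd
```

In a training loop, one would periodically evaluate the world model on the held-out experience, pass both losses to `update`, and use the returned ratio to decide how many gradient steps to take per newly collected environment step.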