In this work, we argue that an online evaluation budget is essential for a reliable comparison of deep offline RL algorithms. First, we delineate that the online evaluation budget is problem-dependent: some problems allow for only a small budget, while others permit a larger one. Second, we demonstrate that the preference between algorithms is budget-dependent across a diverse range of decision-making domains such as Robotics, Finance, and Energy Management. Following these points, we suggest reporting the performance of deep offline RL algorithms under varying online evaluation budgets. To facilitate this, we propose using a reporting tool from the NLP field, Expected Validation Performance. This technique makes it possible to reliably estimate the expected maximum performance under different budgets without requiring any computation beyond the hyperparameter search. By employing this tool, we also show that Behavioral Cloning often compares favorably to offline RL algorithms when working within a limited budget.
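To make the reporting tool concrete, the following is a minimal sketch of the expected-maximum estimator behind Expected Validation Performance: given the scores collected during hyperparameter search, it estimates the best score one would expect to obtain when only n of the trained policies can be evaluated online. The function name `expected_max_performance` and the toy scores are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def expected_max_performance(scores, budgets):
    """Expected maximum score when only n policies can be evaluated online,
    estimated from the scores observed during hyperparameter search
    (empirical-CDF estimator, draws assumed i.i.d.)."""
    v = np.sort(np.asarray(scores, dtype=float))          # v_(1) <= ... <= v_(N)
    cdf = np.arange(1, len(v) + 1) / len(v)                # empirical CDF at each v_(i)
    results = {}
    for n in budgets:
        p_max = cdf ** n                                   # P(max of n draws <= v_(i))
        weights = np.diff(np.concatenate(([0.0], p_max)))  # P(max of n draws == v_(i))
        results[n] = float(np.sum(weights * v))
    return results

# Toy usage: normalized returns from a hypothetical 20-configuration search.
rng = np.random.default_rng(0)
search_scores = rng.uniform(30, 90, size=20)
print(expected_max_performance(search_scores, budgets=[1, 5, 10, 20]))
```

Note that the estimate reuses only the scores already produced by the hyperparameter search, which is why no additional computation is required to report performance across budgets.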