As real-time processing has grown in importance, so has the need for efficient implementations of reinforcement learning (RL) algorithms. Despite the many advantages of the Bellman equations underlying RL algorithms, they come with a large search space of design parameters. This research examines the design space exploration of reinforcement learning parameters, specifically those of Policy Iteration. Because fine-tuning the parameters of reinforcement learning algorithms is computationally expensive, we propose an auto-tuner based on ordinal regression to accelerate the exploration of these parameters and, in turn, accelerate convergence toward an optimal policy. Our approach achieves a peak speedup of 1.82x and an average speedup of 1.48x over the previous state of the art.
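To make concrete what tunable design parameters of Policy Iteration can look like, the sketch below shows a standard policy iteration loop (Bellman expectation backups for evaluation, greedy improvement) on a toy MDP. This is a minimal illustration under assumed settings, not the paper's implementation: the discount factor `gamma` and the evaluation tolerance `eval_tol` are hypothetical examples of the kind of parameters an auto-tuner could explore.

```python
# Minimal policy iteration sketch on a toy MDP (illustrative only; gamma and
# eval_tol stand in for the design parameters an auto-tuner might search over).
import numpy as np

def policy_iteration(P, R, gamma=0.9, eval_tol=1e-6):
    """P: (S, A, S) transition probabilities, R: (S, A) expected rewards."""
    n_states, _, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    V = np.zeros(n_states)
    while True:
        # Policy evaluation: iterate the Bellman expectation backup to tolerance.
        while True:
            V_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                              for s in range(n_states)])
            converged = np.max(np.abs(V_new - V)) < eval_tol
            V = V_new
            if converged:
                break
        # Policy improvement: act greedily w.r.t. one-step lookahead values.
        Q = R + gamma * np.einsum('sat,t->sa', P, V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

if __name__ == "__main__":
    # Toy 2-state, 2-action MDP used only to exercise the sketch.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.0, 1.0], [0.6, 0.4]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    pi, V = policy_iteration(P, R, gamma=0.95)
    print("policy:", pi, "values:", V)
```

Even in this toy setting, choices such as `gamma` and `eval_tol` change how many backups are needed before the policy stabilizes, which is the kind of cost the proposed ordinal-regression auto-tuner aims to reduce.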