This paper addresses the need for a platform that provides an efficient framework for running reinforcement learning (RL) experiments. We propose the CaiRL Environment Toolkit as an efficient, compatible, and more sustainable alternative for training learning agents, and we propose methods for developing more efficient environment simulations. There is an increasing focus on developing sustainable artificial intelligence; however, little effort has been made to improve the efficiency of running environment simulations. The most popular development toolkit for reinforcement learning, OpenAI Gym, is built in Python, a powerful but slow programming language. We propose a toolkit written in C++ that offers the same level of flexibility but runs orders of magnitude faster, compensating for Python's inefficiency and thereby reducing the energy consumption, and hence the climate emissions, of RL experiments. CaiRL is also the first reinforcement learning toolkit with a built-in JVM and Flash support, enabling legacy Flash games to be used for reinforcement learning research. We demonstrate the effectiveness of CaiRL on the classic control benchmarks, comparing its execution speed to that of OpenAI Gym. Furthermore, we show that CaiRL can act as a drop-in replacement for OpenAI Gym, leveraging significantly faster training speeds due to reduced environment computation time.