In reinforcement learning (RL) research, simulations enable the benchmarking of algorithms as well as the prototyping and hyper-parameter tuning of agents. To promote RL in both research and real-world applications, frameworks are required that are, on the one hand, efficient enough to run experiments as fast as possible. On the other hand, they must be flexible enough to allow the integration of newly developed optimization techniques, e.g. new RL algorithms, which are continuously put forward by an active research community. In this paper, we introduce Karolos, an RL framework developed for robotic applications, with a particular focus on transfer scenarios with varying robot-task combinations, reflected in a modular environment architecture. In addition, we provide implementations of state-of-the-art RL algorithms along with common learning-facilitating enhancements, as well as an architecture to parallelize environments across multiple processes to significantly speed up experiments. The code is open source and published on GitHub with the aim of promoting research on RL applications in robotics.
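For illustration, below is a minimal sketch of the two architectural ideas named in the abstract: composing interchangeable robot and task components into a single environment, and rolling out environment copies in parallel worker processes. All names here (RobotTaskEnv, ReachTask, rollout_worker, ...) are hypothetical stand-ins, not the actual Karolos API.

```python
import multiprocessing as mp
import numpy as np


class Robot:
    """Toy stand-in for a robot model (hypothetical)."""
    def reset(self):
        self.pos = np.zeros(2)

    def apply(self, action):
        self.pos = self.pos + action

    def state(self):
        return self.pos


class ReachTask:
    """Toy stand-in for a task definition (hypothetical)."""
    def reset(self):
        self.goal = np.random.uniform(-1.0, 1.0, size=2)

    def state(self):
        return self.goal

    def evaluate(self, robot):
        dist = np.linalg.norm(robot.pos - self.goal)
        return -dist, dist < 0.05  # reward, done


class RobotTaskEnv:
    """Composes any robot with any task into one environment,
    mirroring the modular robot-task architecture described above."""
    def __init__(self, robot, task):
        self.robot, self.task = robot, task

    def reset(self):
        self.robot.reset()
        self.task.reset()
        return np.concatenate([self.robot.state(), self.task.state()])

    def step(self, action):
        self.robot.apply(action)
        obs = np.concatenate([self.robot.state(), self.task.state()])
        reward, done = self.task.evaluate(self.robot)
        return obs, reward, done


def rollout_worker(seed, steps, queue):
    """Runs one environment copy in its own process."""
    np.random.seed(seed)
    env = RobotTaskEnv(Robot(), ReachTask())
    env.reset()
    total = 0.0
    for _ in range(steps):
        _, reward, done = env.step(np.random.uniform(-0.1, 0.1, size=2))
        total += reward
        if done:
            env.reset()
    queue.put((seed, total))


if __name__ == "__main__":
    # Parallelize environments across multiple processes.
    queue = mp.Queue()
    workers = [mp.Process(target=rollout_worker, args=(s, 100, queue))
               for s in range(4)]
    for w in workers:
        w.start()
    results = [queue.get() for _ in workers]
    for w in workers:
        w.join()
    print(results)
```

Because the robot and the task only interact through a narrow interface (state, apply, evaluate), either component can be swapped independently, which is what makes transfer across robot-task combinations convenient in such a design.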