In the research area of reinforcement learning (RL), novel and promising methods are frequently developed and introduced to the RL community. However, although many researchers are keen to apply their methods to real-world problems, implementing such methods in real industry environments is often a frustrating and tedious process. Academic research groups generally have only limited access to real industrial data and applications. For this reason, new methods are usually developed, evaluated, and compared using artificial software benchmarks. On the one hand, these benchmarks are designed to provide interpretable RL training scenarios and detailed insight into the learning process of the method at hand. On the other hand, they usually bear little resemblance to industrial real-world applications. We therefore used our industry experience to design a benchmark that bridges the gap between freely available, documented, and well-motivated artificial benchmarks and the properties of real industrial problems. The resulting industrial benchmark (IB) has been made publicly available to the RL community by publishing its Java and Python code, including an OpenAI Gym wrapper, on GitHub. In this paper we motivate and describe in detail the IB's dynamics and identify prototypic experimental settings that capture common situations in real-world industry control problems.
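To illustrate how the published Gym wrapper could be used in practice, the following minimal Python sketch sets up a short interaction loop with the IB through a Gym-style interface. The module name industrial_benchmark_python, the environment id "IndustrialBenchmark-v1", and the observation layout are assumptions made here for illustration only; the actual registration name and API are documented in the GitHub repository.

```python
# Minimal usage sketch of the IB via its OpenAI Gym wrapper.
# NOTE: the import name and the environment id below are assumptions for
# illustration; consult the published GitHub repository for the real ones.
import gym
import industrial_benchmark_python  # hypothetical module that registers the IB env

env = gym.make("IndustrialBenchmark-v1")  # assumed environment id

obs = env.reset()
total_reward = 0.0
for _ in range(1000):
    # Random actions stand in for an actual RL policy.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()

print("cumulative reward over 1000 steps:", total_reward)
```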