In this article we study the problem of training intelligent agents using Reinforcement Learning for the purpose of game development. Unlike systems built to replace human players and achieve super-human performance, our agents aim to produce meaningful interactions with the player while exhibiting the behavioral traits desired by game designers. We show how to combine distinct behavioral policies into a meaningful "fusion" policy that comprises all of these behaviors. To this end, we propose four different policy fusion methods for combining pre-trained policies. We further demonstrate how these methods can be used in combination with Inverse Reinforcement Learning to create intelligent agents with specific behavioral styles chosen by game designers, without having to define many, possibly poorly designed, reward functions. Experiments on two different environments indicate that entropy-weighted policy fusion significantly outperforms all other methods. We provide several practical examples and use-cases showing how these methods are useful for video game production and designers.
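To make the idea of entropy-weighted policy fusion concrete, the sketch below fuses the per-state action distributions of several pre-trained policies, weighting each by the inverse of its entropy so that more decisive (low-entropy) policies dominate the fused distribution. This is a minimal illustrative sketch under our own assumptions, not the paper's exact formulation; the `entropy_weighted_fusion` function and the example policies are hypothetical.

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a discrete action distribution.
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def entropy_weighted_fusion(policies):
    # Weight each policy's action distribution by its inverse entropy,
    # so confident (low-entropy) policies contribute more, then
    # renormalize the mixture into a valid probability distribution.
    weights = np.array([1.0 / (entropy(p) + 1e-8) for p in policies])
    weights /= weights.sum()
    fused = sum(w * p for w, p in zip(weights, policies))
    return fused / fused.sum()

# Hypothetical example: an "aggressive" style policy that is confident
# about action 0, fused with a near-uniform "idle" style policy.
aggressive = np.array([0.9, 0.05, 0.05])
idle = np.array([0.34, 0.33, 0.33])
fused = entropy_weighted_fusion([aggressive, idle])
```

Because the aggressive policy has much lower entropy, the fused policy leans strongly toward its preferred action while still retaining some mass from the idle policy.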