Modern computer games have extremely large state and action spaces. To detect bugs in a game's model, human testers play the game repeatedly to explore it and find errors. Such gameplay is exhaustive and time consuming. Moreover, since robotics simulators depend on similar methods of model specification and debugging, the problem of finding errors in a model is also of interest to the robotics community, where it can help ensure that robot behaviors and interactions remain consistent in simulation. Previous methods have used reinforcement learning (arXiv:2103.13798) and search-based methods (Chang, 2019; Chang, 2021; arXiv:1811.06962), including Rapidly-exploring Random Trees (RRT), to explore a game's state-action space and find bugs. However, such search- and exploration-based methods are not efficient at exploring the state-action space without a pre-defined heuristic. In this work we combine a human tester's expertise in solving games with the exhaustiveness of RRT to search a game's state space efficiently and with high coverage. This paper introduces Cloning-Assisted RRT (CA-RRT), which tests a game through search. We compare our method to two existing baselines: 1) a weighted RRT as described in arXiv:1812.03125; and 2) a human-demonstration-seeded RRT as described by Chang et al. We find CA-RRT applies to more game maps and explores more game states in fewer tree expansions/iterations than either baseline. In each test, CA-RRT reached more states on average than weighted RRT within the same number of iterations; in our tested environments, it matched the state coverage of weighted RRT with more than 5,000 fewer iterations on average, almost a 50% reduction, while applying to more scenarios. Moreover, as a consequence of our first-person behavior cloning approach, CA-RRT generalized to unseen game maps, rather than merely seeding the RRT with human-demonstrated states.
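
To make the mechanism concrete, below is a minimal sketch of how a behavior-cloned policy can bias an RRT-style expansion loop. This is an illustration of the idea, not the paper's reference implementation: the environment interface (reset, set_state, observe, step, action_space), the hashability of states, the bc_policy function, and the parameter values are all assumptions made for this sketch, and the uniform node selection here stands in for the weighting and nearest-neighbor machinery a full RRT variant would use.

    # Minimal sketch of a Cloning-Assisted RRT expansion loop.
    # Hypothetical interfaces, for illustration only: `env` is assumed to
    # expose reset(), set_state(s), observe(s), step(a) and a sampleable
    # action_space; states are assumed hashable; `bc_policy` is a
    # behavior-cloning policy trained on first-person human demonstrations.
    import random

    def ca_rrt(env, bc_policy, iterations=10_000, p_policy=0.5, rollout_len=8):
        root = env.reset()
        tree = [root]                # reachable game states found so far
        visited = {root}
        for _ in range(iterations):
            state = random.choice(tree)      # pick a node to expand
            env.set_state(state)             # rewind the simulator to it
            obs = env.observe(state)
            for _ in range(rollout_len):
                if random.random() < p_policy:
                    action = bc_policy(obs)  # human-like, cloned action
                else:
                    action = env.action_space.sample()  # exploratory action
                obs, state = env.step(action)
                if state not in visited:     # new coverage: grow the tree
                    visited.add(state)
                    tree.append(state)
        return visited               # the set of states the search covered

The key design choice this sketch highlights is that the cloned policy supplies actions during expansion, rather than only supplying demonstration states to seed the tree, which is what lets the approach carry over to unseen maps.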