When playing video games, different players usually exhibit their own playstyles. Recently, video game AIs have seen great improvements in playing strength. However, past research on analyzing player behaviors still relies on heuristic rules or on behavioral features that require game-environment support, making it exhausting for developers to define the features that discriminate various playstyles. In this paper, we propose the first metric for video game playstyles computed directly from game observations and actions, without any prior specification of the playstyles in the target game. Our method is built upon a novel scheme for learning discrete representations that maps game observations into latent discrete states, such that playstyles can be exhibited through these discrete states. Namely, we measure the playstyle distance based on game observations aligned to the same states. Our experiments demonstrate high playstyle accuracy on several video game platforms, including TORCS, RGSK, and seven Atari games, and for different agents, including rule-based AI bots, learning-based AI bots, and human players.
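To make the measurement idea concrete, the following is a minimal Python sketch, not the paper's exact formulation: it assumes a hypothetical encoder `phi` that maps each game observation to a discrete latent state, and it compares two players only on observations aligned to the same states by averaging a distance (L2 here, as an illustrative choice) between their per-state action distributions.

```python
# Illustrative sketch of a playstyle distance over discrete states.
# Assumptions (not from the paper's text): phi() is a learned discrete encoder,
# actions are integers in [0, num_actions), and L2 is used as the per-state distance.

from collections import defaultdict
import numpy as np

def collect_action_stats(trajectory, phi):
    """Count per-state action frequencies for one player.

    trajectory: iterable of (observation, action) pairs from one player's play records.
    phi: hypothetical encoder mapping an observation to a hashable discrete state.
    """
    stats = defaultdict(lambda: defaultdict(int))
    for obs, action in trajectory:
        stats[phi(obs)][action] += 1
    return stats

def playstyle_distance(traj_a, traj_b, phi, num_actions):
    """Average distance between two players' action distributions,
    computed only over the discrete states visited by both players."""
    stats_a = collect_action_stats(traj_a, phi)
    stats_b = collect_action_stats(traj_b, phi)
    shared_states = set(stats_a) & set(stats_b)
    if not shared_states:
        return float("nan")  # no aligned states: distance undefined in this sketch
    dists = []
    for s in shared_states:
        p = np.array([stats_a[s].get(a, 0) for a in range(num_actions)], dtype=float)
        q = np.array([stats_b[s].get(a, 0) for a in range(num_actions)], dtype=float)
        p /= p.sum()
        q /= q.sum()
        dists.append(np.linalg.norm(p - q))  # per-state distance; the paper may use another measure
    return float(np.mean(dists))
```

In this sketch, a smaller value indicates more similar playstyles; restricting the comparison to shared states is what "game observations aligned to the same states" refers to above.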