Enemy strategies in turn-based games should be surprising and unpredictable. This study introduces Mirror Mode, a new game mode in which the enemy AI mimics a player's personal strategy, challenging them to keep changing their gameplay. A simplified version of the Nintendo strategy video game Fire Emblem Heroes was built in Unity, with a Standard Mode and a Mirror Mode. Our first set of experiments identifies a suitable model for imitating player demonstrations, combining techniques from Reinforcement Learning and Imitation Learning: Generative Adversarial Imitation Learning, Behavioral Cloning, and Proximal Policy Optimization. The second set of experiments evaluates the constructed model through player tests, in which models are trained on demonstrations provided by participants. Participants' gameplay indicates good imitation of defensive behavior, but not of offensive strategies. Participants' survey responses indicated that they recognized their own retreating tactics and reported higher overall player satisfaction with Mirror Mode. Refining the model further may improve imitation quality and increase player satisfaction, especially when players face their own strategies. The full code and survey results are available at: https://github.com/YannaSmid/MirrorMode
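The abstract does not name the training toolkit, but combining PPO, GAIL, and Behavioral Cloning inside a Unity game is commonly done with the Unity ML-Agents toolkit. As a minimal sketch under that assumption, a trainer configuration for the mimicking enemy agent could look like the following; the behavior name `EnemyAgent`, the demonstration path, and all hyperparameter values are illustrative placeholders rather than values from the study.

```yaml
behaviors:
  EnemyAgent:                        # hypothetical behavior name for the mimicking enemy AI
    trainer_type: ppo                # PPO as the base reinforcement-learning trainer
    hyperparameters:
      batch_size: 256
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        strength: 1.0                # reward from the game environment itself
        gamma: 0.99
      gail:                          # GAIL reward learned from player demonstrations
        strength: 0.5
        gamma: 0.99
        demo_path: Demos/player.demo # placeholder path to recorded player demonstrations
    behavioral_cloning:              # BC pretraining on the same demonstrations
      demo_path: Demos/player.demo
      strength: 0.5
      steps: 150000
    max_steps: 500000
```

With ML-Agents, such a configuration would be run with `mlagents-learn config.yaml --run-id=mirror_mode` against the Unity build; the run identifier is likewise illustrative.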