This paper presents a novel approach to automated playtesting for predicting human player behavior and experience. It has previously been demonstrated that Deep Reinforcement Learning (DRL) game-playing agents can predict both game difficulty and player engagement, operationalized as average pass and churn rates. We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS). We also motivate an enhanced selection strategy for predictor features, based on the observation that an AI agent's best-case performance can yield stronger correlations with human data than the agent's average performance. Both additions consistently improve prediction accuracy, and the DRL-enhanced MCTS outperforms both DRL and vanilla MCTS on the hardest levels. We conclude that player modelling via automated playtesting can benefit from combining DRL and MCTS. Moreover, when AI gameplay does not yield good predictions on average, it can be worthwhile to examine a subset of the best runs from repeated AI agent playthroughs.
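To make the feature-selection idea concrete, the following minimal Python sketch (our own illustration, not the paper's implementation) contrasts an average-performance feature with a best-case feature computed over repeated agent runs and correlates each with human pass rates. The function `agent_run_score`, the number of runs, and the synthetic difficulty and human-data values are all illustrative assumptions.

```python
# Hypothetical sketch: average vs. best-case agent performance as predictor features.
import random
from statistics import mean

def agent_run_score(level_difficulty: float) -> float:
    """Placeholder for one AI playthrough; higher score means better performance."""
    return max(0.0, random.gauss(1.0 - level_difficulty, 0.2))

def level_features(level_difficulty: float, runs: int = 50) -> dict:
    """Collect repeated agent runs on one level and summarize them as features."""
    scores = [agent_run_score(level_difficulty) for _ in range(runs)]
    return {
        "avg_score": mean(scores),   # average-performance feature
        "best_score": max(scores),   # best-case feature
    }

def pearson(xs, ys) -> float:
    """Plain Pearson correlation, used to compare each feature against human data."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

if __name__ == "__main__":
    difficulties = [0.1, 0.3, 0.5, 0.7, 0.9]           # synthetic level difficulties
    human_pass_rates = [0.92, 0.75, 0.55, 0.35, 0.15]  # synthetic human data
    feats = [level_features(d) for d in difficulties]
    print("avg  vs human:", pearson([f["avg_score"] for f in feats], human_pass_rates))
    print("best vs human:", pearson([f["best_score"] for f in feats], human_pass_rates))
```

In this toy setting both features track human pass rates; the abstract's point is that on real data the best-case statistic can correlate more strongly than the average, which motivates inspecting the best runs when average agent performance predicts poorly.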