The game industry has long been troubled by malicious activities that rely on game bots. Game bots disturb other players and damage the in-game ecosystem. For these reasons, game companies have invested heavily in learning-based methods for detecting bots among player characters. However, one problem with these detection methodologies is that they provide no rational explanation for their decisions. To address this problem, we investigate the explainability of game bot detection in this work. We develop an XAI model using a dataset from the Korean MMORPG AION, which contains game logs of both human players and game bots. We apply several classification models to the dataset and analyze them with interpretable models. This yields explanations of game bot behavior, and we evaluate the truthfulness of these explanations. Moreover, interpretability helps minimize false detections, which impose unfair restrictions on human players.
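To make the approach concrete, the sketch below illustrates one common way to analyze a black-box bot classifier with an interpretable model: training a shallow decision-tree surrogate to mimic the classifier's predictions and reading off its rules. This is a generic illustration, not the paper's exact pipeline; the feature names (play_time, sit_count, exp_per_hour) and the synthetic data are assumptions made for the example.

```python
# A minimal sketch of explaining a black-box bot detector with an
# interpretable global surrogate model. Feature names and the synthetic
# data are illustrative assumptions, not the AION dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["play_time", "sit_count", "exp_per_hour"]  # hypothetical log features

# Synthetic stand-in for per-character log aggregates (1 = bot, 0 = human).
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 0.8).astype(int)

# Black-box detector: accurate but opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Interpretable surrogate: a shallow tree trained to reproduce the detector's
# decisions, yielding human-readable rules describing "bot-like" behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity measures how well the surrogate's explanations track the detector;
# the printed rules are the explanation a human reviewer would inspect.
print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=features))
```

The surrogate's fidelity score plays the role of the truthfulness evaluation mentioned above: rules extracted from a low-fidelity surrogate should not be trusted as explanations of the detector.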