Recent advances in game informatics have made it possible to find strong strategies across a diverse range of games. However, these strategies are usually difficult for humans to interpret. Meanwhile, research in Explainable Artificial Intelligence (XAI) has seen a notable surge in activity, and interpreting strong or near-optimal strategies, or the game itself, can provide valuable insights. In this paper, we propose two Shapley-value-based methods for quantifying feature importance: one for the game itself and another for individual AIs. We empirically show that the proposed methods yield intuitive explanations that resonate with and augment human understanding.
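For context, both methods build on the standard Shapley value, which attributes to each feature its average marginal contribution over all coalitions of features. The formula below is the textbook definition, not the paper's specific characteristic functions; here $N$ denotes the feature set and $v$ a value function assumed to be supplied by either method:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$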