Recent years have seen significant advances in explainable AI as the need to understand deep learning models has grown alongside the increased emphasis on trust and ethics in AI. Comprehensible models of sequential decision tasks pose a particular challenge because they require understanding not only individual predictions but also a series of predictions that interact with environmental dynamics. We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas. Given a set of agent traces, we first cluster the traces using a novel embedding method that captures frequent action patterns. We then search for logical formulas that explain the agent strategies in the different clusters. We evaluate our framework on combat scenarios in StarCraft II (SC2), using traces from a handcrafted expert policy and a trained reinforcement learning agent. We implemented a feature extractor for SC2 environments that extracts traces from agent replays as sequences of high-level features describing both the state of the environment and the agent's local behavior. We also designed a visualization tool that depicts the movement of units in the environment and helps show how different task conditions lead to distinct agent behavior patterns in each trace cluster. Experimental results show that our framework separates agent traces into distinct groups of behaviors for which our strategy-inference approach produces consistent, meaningful, and easily understood strategy descriptions.
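To make the trace-clustering step concrete, the sketch below shows one plausible instantiation under stated assumptions: the abstract describes the embedding only as capturing frequent action patterns, so here traces are embedded as normalized action-bigram frequency vectors and clustered with k-means. The trace data, the helper `embed_trace`, and the choice of two clusters are all illustrative, not the paper's actual method.

```python
# A minimal sketch of the trace-clustering step, assuming a simple
# action-bigram frequency embedding. The traces, embed_trace helper,
# and cluster count are hypothetical placeholders.
from collections import Counter
from itertools import pairwise  # Python 3.10+

import numpy as np
from sklearn.cluster import KMeans

# Example traces: each trace is a sequence of discrete agent actions.
traces = [
    ["advance", "attack", "attack", "retreat"],
    ["advance", "attack", "attack", "attack"],
    ["retreat", "regroup", "advance", "attack"],
]

# Build a shared vocabulary of action bigrams across all traces.
vocab = sorted({bg for t in traces for bg in pairwise(t)})
index = {bg: i for i, bg in enumerate(vocab)}

def embed_trace(trace):
    """Embed a trace as a normalized vector of action-bigram counts."""
    counts = Counter(pairwise(trace))
    vec = np.zeros(len(vocab))
    for bg, c in counts.items():
        vec[index[bg]] = c
    return vec / max(vec.sum(), 1.0)

X = np.stack([embed_trace(t) for t in traces])

# Cluster the embedded traces; in the framework, each resulting cluster
# is then summarized by a temporal logic formula in a separate
# strategy-inference step.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Each cluster would then be described by a temporal logic formula found in the search step; for instance, a formula of the shape G(low_health -> F retreat) ("whenever health is low, the agent eventually retreats") is the kind of hypothetical strategy description such a search might produce.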