Providing natural language explanations for the predictions of deep-learning-based vehicle controllers is critical: it enhances transparency and eases auditing. In this work, a state-of-the-art (SOTA) prediction and explanation model is thoroughly evaluated and validated, as a benchmark, on the new Sense--Assess--eXplain (SAX) dataset. Additionally, we develop a new explainer model that improves on the baseline architecture in two ways: (i) the integration of part-of-speech prediction and (ii) the introduction of special-token penalties. On the BLEU metric, our explanation generation technique outperforms the SOTA by a factor of 7.7 when applied to the BDD-X dataset; description generation likewise improves by a factor of 1.3. Our work thus contributes to the realisation of future explainable autonomous vehicles.
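The special-token penalty mentioned in (ii) can be illustrated as a re-weighted cross-entropy loss in which mistakes on designated special tokens (e.g. padding or end-of-sequence markers) are up-weighted during training. This is only a minimal sketch of the general idea; the function name, the uniform `penalty` multiplier, and the choice of special tokens are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def penalised_token_loss(logits, targets, special_ids, penalty=2.0):
    """Token-level cross-entropy where losses on special tokens are
    multiplied by `penalty` (penalty > 1 up-weights those positions).

    logits:  (seq_len, vocab_size) unnormalised scores per position
    targets: (seq_len,) ground-truth token ids
    special_ids: iterable of token ids to penalise (assumed, e.g. <pad>, <eos>)
    """
    # Per-position cross-entropy, kept unreduced so we can re-weight it.
    per_token = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.ones_like(per_token)
    for sid in special_ids:
        weights[targets == sid] = penalty
    return (weights * per_token).mean()
```

A larger `penalty` makes the generator pay proportionally more for errors at special-token positions, which can discourage degenerate outputs such as premature end-of-sequence tokens.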