End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. Early attempts use attention maps or cost volumes to improve model explainability, but these outputs are difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides user-friendly natural language narrations and reasoning for each decision-making step of autonomous vehicular control and action. ADAPT jointly trains the driving caption task and the vehicular control prediction task through a shared video representation. Experiments on the BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models, and data are available at https://github.com/jxbbb/ADAPT.
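For intuition, the joint-training setup described above can be sketched roughly as follows: a shared video encoder feeds both a captioning head (narration and reasoning) and a control-prediction head, and the two losses are optimized together. The module names, dimensions, and losses below are illustrative assumptions for exposition, not the ADAPT implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of joint training with a shared video representation.
# All hyperparameters and module choices here are assumptions.

class SharedVideoEncoder(nn.Module):
    def __init__(self, feat_dim=768, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, frame_feats):           # (B, T, feat_dim) per-frame features
        return self.encoder(frame_feats)      # shared video representation

class CaptionHead(nn.Module):
    def __init__(self, feat_dim=768, vocab_size=30522):
        super().__init__()
        self.proj = nn.Linear(feat_dim, vocab_size)

    def forward(self, video_tokens):
        return self.proj(video_tokens)        # (B, T, vocab_size) token logits

class ControlHead(nn.Module):
    def __init__(self, feat_dim=768, num_signals=2):   # e.g. speed, course
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, num_signals))

    def forward(self, video_tokens):
        pooled = video_tokens.mean(dim=1)     # pool over time
        return self.mlp(pooled)               # (B, num_signals)

encoder, cap_head, ctl_head = SharedVideoEncoder(), CaptionHead(), ControlHead()
frames = torch.randn(2, 32, 768)              # dummy batch: 2 clips, 32 frames
caption_targets = torch.randint(0, 30522, (2, 32))
control_targets = torch.randn(2, 2)

video_tokens = encoder(frames)
loss = (nn.functional.cross_entropy(cap_head(video_tokens).transpose(1, 2),
                                    caption_targets)
        + nn.functional.mse_loss(ctl_head(video_tokens), control_targets))
loss.backward()                               # one joint update over both tasks
```

The key design point suggested by the abstract is that both heads backpropagate into the same video encoder, so the captioning supervision and the control supervision regularize a single shared representation.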