Explainable AI (XAI) methods are frequently applied to obtain qualitative insights about deep models' predictions. However, such insights need to be interpreted by a human observer to be useful. In this paper, we aim to use explanations directly to make decisions without human observers. We adopt two gradient-based explanation methods, Integrated Gradients (IG) and backprop, for the task of 3D object detection. Then, we propose a set of quantitative measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks. These scores quantify the concentration of attributions within the boundaries of detected objects. We evaluate the effectiveness of XC scores via the task of distinguishing true-positive (TP) and false-positive (FP) detected objects in the KITTI and Waymo datasets. The results demonstrate an improvement of more than 100\% on both datasets compared to other heuristics, such as random guesses and the number of LiDAR points in the bounding box, raising confidence in XC's potential for application in further use cases. Our results also indicate that computationally expensive XAI methods like IG may not be more valuable when used quantitatively compared to simpler methods.
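To make the intuition concrete, a minimal sketch of such a concentration measure, assuming per-point attributions $a_i$ (e.g., from IG or backprop) and a detected bounding box $b$, is the fraction of the total attribution magnitude that falls inside the box (the paper's exact XC definition may differ):
\[
\mathrm{XC}(b) \;=\; \frac{\sum_{i \in b} |a_i|}{\sum_{i} |a_i|},
\]
where the numerator sums over input points inside $b$ and the denominator over all input points; values close to $1$ indicate that the explanation is concentrated on the detected object.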