Deep learning models have become overly complex, yet they enjoy stellar success in solving classical problems such as image classification and object detection. Several methods have been proposed to explain the decisions of these models. Black-box methods that generate saliency maps are particularly interesting because they do not rely on the internals of the model to explain its decisions. Most black-box methods perturb the input and observe the changes in the output. We formulate saliency map generation as a sequential search problem and leverage Reinforcement Learning (RL) to accumulate evidence from input images that most strongly supports the decisions made by a classifier. Such a strategy encourages an intelligent search for the perturbations that lead to high-quality explanations. While successful black-box explanation approaches rely on heavy computation and suffer from small-sample approximation, the deterministic policy learned by our method makes inference far more efficient. Experiments on three benchmark datasets demonstrate that the proposed approach outperforms the state of the art in inference time without hurting explanation quality. Project Page: https://cvir.github.io/projects/rexl.html
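To make the sequential-search framing concrete, below is a minimal, self-contained sketch of the idea under toy assumptions: the image is divided into a coarse grid of cells, the black-box classifier is a stub function f, and the learned deterministic policy is stood in for by a greedy one-step search. All names (f, apply_mask, policy, GRID, STEPS) are hypothetical illustrations, not the paper's implementation; a trained RL policy would emit the next action directly instead of probing the classifier per candidate cell.

```python
# Hypothetical sketch: saliency-map generation as sequential search over
# grid cells, accumulating the evidence that most supports the classifier.
import numpy as np

GRID = 4    # the image is split into a GRID x GRID lattice of cells
STEPS = 5   # number of cells the agent reveals as "evidence"

def f(image: np.ndarray) -> float:
    """Black-box classifier stub returning a target-class score.
    As a toy signal, it rewards energy in the top-left quadrant."""
    h, w = image.shape
    return float(image[: h // 2, : w // 2].sum() / (image.sum() + 1e-8))

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the revealed cells; all other cells are perturbed (zeroed)."""
    h, w = image.shape
    ch, cw = h // GRID, w // GRID
    out = np.zeros_like(image)
    for i in range(GRID):
        for j in range(GRID):
            if mask[i, j]:
                out[i*ch:(i+1)*ch, j*cw:(j+1)*cw] = \
                    image[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
    return out

def policy(image: np.ndarray, mask: np.ndarray):
    """Greedy stand-in for the learned deterministic policy: pick the
    unrevealed cell whose reveal most increases the classifier score."""
    base = f(apply_mask(image, mask))
    best, best_gain = None, -np.inf
    for i in range(GRID):
        for j in range(GRID):
            if mask[i, j]:
                continue
            trial = mask.copy()
            trial[i, j] = True
            gain = f(apply_mask(image, trial)) - base
            if gain > best_gain:
                best, best_gain = (i, j), gain
    return best

rng = np.random.default_rng(0)
image = rng.random((32, 32))
mask = np.zeros((GRID, GRID), dtype=bool)  # accumulated saliency evidence
for _ in range(STEPS):
    i, j = policy(image, mask)
    mask[i, j] = True  # commit the most supportive cell to the saliency map
print("Salient cells (row, col):", np.argwhere(mask).tolist())
```

The greedy search above costs one classifier query per candidate cell per step; replacing it with a trained deterministic policy, as the abstract describes, removes that per-candidate probing and is what makes inference efficient.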