Deep networks for Monocular Depth Estimation (MDE) have recently achieved promising performance, and further understanding the interpretability of these networks is of great importance. Existing methods attempt to provide post-hoc explanations by investigating visual cues, which may fail to explore the internal representations learned by deep networks. In this paper, we find that some hidden units of the network are selective to certain ranges of depth, and such behavior can thus serve as a way to interpret the internal representations. Based on our observations, we quantify the interpretability of a deep MDE network by the depth selectivity of its hidden units. We then propose a method to train interpretable MDE deep networks without changing their original architectures, by assigning a depth range for each unit to select. Experimental results demonstrate that our method is able to enhance the interpretability of deep MDE networks by largely improving the depth selectivity of their units, while not harming, and in some cases even improving, the depth estimation accuracy. We further provide a comprehensive analysis showing the reliability of selective units, the applicability of our method to different layers, models, and datasets, and a demonstration of using it to analyze model errors. Source code and models are available at https://github.com/youzunzhi/InterpretableMDE.
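To make the notion of depth selectivity concrete, below is a minimal sketch of how a per-unit selectivity index might be computed. This is an illustrative assumption, not the paper's exact definition (see the linked source code for that): it adapts a class-selectivity-style index, comparing a unit's mean activation on pixels within its preferred depth range against its mean activation on the remaining ranges. The function name, arguments, and binning scheme are all hypothetical.

```python
import torch

def depth_selectivity(activations: torch.Tensor,
                      depths: torch.Tensor,
                      bin_edges: torch.Tensor) -> float:
    """Hypothetical per-unit depth-selectivity index in [0, 1].

    activations: (N, H, W) post-ReLU responses of ONE hidden unit over N images
                 (assumed non-negative; depths resized to the feature-map resolution)
    depths:      (N, H, W) ground-truth depth values aligned with `activations`
    bin_edges:   (K+1,) boundaries partitioning depth into K ranges
    Higher values mean the unit responds mostly within a single depth range.
    """
    means = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (depths >= lo) & (depths < hi)
        # Mean response on pixels whose depth falls in this range;
        # ranges with no pixels contribute zero.
        means.append(activations[mask].mean() if mask.any()
                     else activations.new_tensor(0.0))
    mu = torch.stack(means)
    mu_max = mu.max()                              # preferred depth range
    mu_rest = (mu.sum() - mu_max) / (len(mu) - 1)  # mean over the other ranges
    return ((mu_max - mu_rest) / (mu_max + mu_rest + 1e-8)).item()
```

Under this sketch, a unit that fires only on, say, near-range pixels would score close to 1, while a unit with uniform responses across all depth ranges would score close to 0, matching the intuition that selective units are the ones amenable to interpretation.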