Deep Neural Networks (DNNs) draw their power from the representations they learn. However, while incredibly effective at learning complex abstractions, they are susceptible to learning malicious concepts, due to spurious correlations inherent in the training data. So far, existing methods for uncovering such artifactual behavior in trained models focus on finding artifacts in the input data, which requires both the availability of a data set and human supervision. In this paper, we introduce DORA (Data-agnOstic Representation Analysis): the first data-agnostic framework for the analysis of the representation space of DNNs. We propose a novel distance measure between representations that utilizes the self-explaining capabilities within the network itself, without access to any data, and quantitatively validate its alignment with human-defined semantic distances. We further demonstrate that this metric can be utilized for the detection of anomalous representations, which may bear a risk of encoding unintended spurious concepts that deviate from the desired decision-making policy. Finally, we demonstrate the practical utility of DORA by analyzing and identifying artifactual representations in widely popular Computer Vision models.