State-of-the-art Convolutional Neural Networks (CNNs) in computer vision have millions of parameters, which makes explaining their complex decisions to humans challenging. One technical approach to reducing CNN complexity is network pruning, in which less important parameters are removed. The work presented in this paper investigates whether this technical complexity reduction also improves perceived explainability. To do so, we conducted a pre-study and two human-grounded experiments, assessing the effects of different pruning ratios on CNN explainability. Overall, we evaluated four compression rates (CPR 2, 4, 8, and 32) across 37,500 tasks on Amazon Mechanical Turk. Results indicate that lower compression rates have a positive influence on explainability, while higher compression rates have a negative effect. Furthermore, we identified sweet spots that increase both perceived explainability and the model's performance.
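To make the pruning notion concrete, the following is a minimal sketch of magnitude-based pruning at a fixed compression rate in PyTorch. The helper `magnitude_prune` and the keep-fraction logic are illustrative assumptions for this sketch, not the paper's actual pruning procedure.

```python
import torch

def magnitude_prune(model: torch.nn.Module, cpr: float) -> None:
    """Zero all but the largest-magnitude 1/cpr fraction of each layer's weights.

    Hypothetical helper: keeps at least numel/cpr weights per layer
    (ties at the threshold may keep slightly more).
    """
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            w = module.weight.data
            k = max(1, int(w.numel() / cpr))           # number of weights to keep
            threshold = w.abs().flatten().topk(k).values.min()
            mask = (w.abs() >= threshold).to(w.dtype)  # 1 for kept, 0 for pruned
            w.mul_(mask)

# Example: apply the four compression rates evaluated in the study.
# for cpr in (2, 4, 8, 32):
#     magnitude_prune(model, cpr)
```

A CPR of 2 keeps half of each layer's weights, while a CPR of 32 keeps roughly 3%, matching the range of compression rates evaluated in the experiments.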