Deep learning models have been shown to be vulnerable to adversarial attacks. This observation has led to analyzing deep learning models not only in terms of their performance measures but also in terms of their robustness to certain kinds of adversarial attacks. We take a further step toward relating the architectural structure of neural networks, viewed from a graph-theoretic perspective, to their robustness. We aim to investigate existing correlations between graph-theoretic properties and the robustness of sparse neural networks. Our hypothesis is that graph-theoretic properties, used as priors on neural network structure, are related to the networks' robustness. To test this hypothesis, we designed an empirical study with neural network models obtained by using random graphs as sparse structural priors for the networks. We additionally evaluated a randomly pruned fully connected network as a point of reference. We found that robustness measures are independent of initialization methods but show weak correlations with graph properties: higher graph densities correlate with lower robustness, yet higher average path lengths and average node eccentricities also correlate negatively with robustness measures. We hope to motivate further empirical and analytical research toward a tighter answer to our hypothesis.
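A minimal sketch of the kind of pipeline the study describes: sample a random graph, use its adjacency matrix as a sparse structural prior (a binary connectivity mask for a layer's weights), and record the graph-theoretic properties that are later correlated with robustness. The graph model (Erdős–Rényi), sizes, and the mask construction below are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx
import numpy as np

def sample_sparse_prior(n_nodes: int, edge_prob: float, seed: int):
    """Sample a random graph and compute the properties studied in the paper."""
    G = nx.erdos_renyi_graph(n_nodes, edge_prob, seed=seed)
    if not nx.is_connected(G):
        # Average path length and eccentricity are undefined on disconnected
        # graphs; resampling is one simple (assumed) way to handle this.
        return None
    props = {
        "density": nx.density(G),
        "avg_path_length": nx.average_shortest_path_length(G),
        "avg_eccentricity": float(np.mean(list(nx.eccentricity(G).values()))),
    }
    # Binary adjacency mask that would gate a layer's weight matrix,
    # making the network sparse by construction.
    mask = nx.to_numpy_array(G, dtype=np.float32)
    return mask, props

# Resample until a connected graph is drawn, then report its properties.
result, seed = None, 0
while result is None:
    result = sample_sparse_prior(n_nodes=64, edge_prob=0.1, seed=seed)
    seed += 1
mask, props = result
print(props)
```

In a full experiment, such a mask would be applied elementwise to a layer's weights during training, and the recorded properties would be correlated with robustness measures under adversarial attack.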