Stochastic neural networks (SNNs) are random functions, and predictions are obtained by averaging over multiple realizations of this random function. Consequently, an adversarial attack is calculated based on one set of samples and applied to the prediction defined by another set of samples. In this paper we analyze robustness in this setting by deriving a sufficient condition for the given prediction process to be robust against the calculated attack. This allows us to identify the factors that lead to an increased robustness of SNNs and helps to explain the impact of the variance and the number of samples. Among other things, our theoretical analysis gives insights into (i) why increasing the number of samples drawn for the estimation of adversarial examples increases the attack's strength, (ii) why decreasing the sample size during inference hardly influences the robustness, and (iii) why a higher prediction variance between realizations relates to a higher robustness. We verify the validity of our theoretical findings by an extensive empirical analysis.
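The setting described above, where the attack is estimated from one set of realizations but applied to a prediction formed from a different, independent set, can be sketched with a toy stochastic model. The linear model, function names, and sample counts below are illustrative assumptions, not the paper's actual setup:

```python
import random

def sample_realization(rng):
    # One realization of the stochastic network: here just a random slope w,
    # standing in for one draw of an SNN's random weights (toy assumption).
    return rng.gauss(2.0, 0.5)

def predict(x, weights):
    # SNN prediction: average the outputs of the sampled realizations.
    return sum(w * x for w in weights) / len(weights)

def craft_attack(x, weights, eps=0.1):
    # FGSM-style step using the gradient of the averaged sampled models;
    # for f_w(x) = w * x the gradient w.r.t. x is just the mean of w.
    grad = sum(weights) / len(weights)
    return x + eps * (1.0 if grad > 0 else -1.0)

rng = random.Random(0)
# The attacker and the defender draw independent sets of realizations:
attack_weights = [sample_realization(rng) for _ in range(64)]
infer_weights = [sample_realization(rng) for _ in range(64)]

x = 1.0
x_adv = craft_attack(x, attack_weights)   # attack computed on one sample set...
adv_pred = predict(x_adv, infer_weights)  # ...applied to a prediction from another
clean_pred = predict(x, infer_weights)
```

Because the attacker's gradient estimate and the inference-time prediction rely on different random draws, the mismatch between the two sets is exactly what the robustness condition in the paper quantifies; larger prediction variance between realizations makes the attacker's estimate a worse proxy for the deployed prediction.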