Deep neural networks for computer vision tasks are deployed in increasingly safety-critical and socially impactful applications, motivating the need to close the gap in model performance under varied, naturally occurring imaging conditions. While the term robustness is used ambiguously across contexts, including adversarial machine learning, here it refers to preserving model performance under naturally induced image corruptions or alterations. We perform a systematic review to identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision. We find that this area of research has received disproportionately little attention relative to adversarial machine learning, yet a significant robustness gap exists that often manifests as performance degradation similar in magnitude to that caused by adversarial conditions. To provide a more transparent definition of robustness across contexts, we introduce a structural causal model of the data-generating process and interpret non-adversarial robustness as pertaining to a model's behavior on corrupted images that correspond to low-probability samples from the unaltered data distribution. We then identify key architecture, data-augmentation, and optimization tactics for improving neural network robustness. This causal view of robustness reveals that common practices in the current literature, regarding both robustness tactics and evaluations, correspond to causal concepts, such as soft interventions resulting in a counterfactually altered distribution of imaging conditions. Through our findings and analysis, we offer perspectives on how future research may mind this evident and significant non-adversarial robustness gap.
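To make the causal framing above concrete, the following is a minimal sketch, not the paper's formal model; all names (render, sample_conditions, noise_sigma) are hypothetical. It treats an observed image as a function of latent scene content and imaging conditions C, and models a corruption benchmark as a soft intervention that replaces the training-time distribution P(C) with a shifted distribution, so corrupted images arise as low-probability samples under the original P(C).

```python
import numpy as np

rng = np.random.default_rng(0)

def render(scene, conditions):
    # Toy imaging mechanism: the observed image is a deterministic function
    # of latent scene content plus noise governed by the conditions C.
    return scene + conditions["noise_sigma"] * rng.standard_normal(scene.shape)

def sample_conditions(sigma_sampler):
    # Imaging conditions C are drawn from a distribution P(C).
    return {"noise_sigma": sigma_sampler()}

# Training-time P(C): mostly clean captures (low noise).
train_sigma = lambda: abs(rng.normal(0.0, 0.05))

# Soft intervention on C: replace P(C) with a shifted distribution
# (e.g., a corruption benchmark) while the scene mechanism stays untouched.
shifted_sigma = lambda: abs(rng.normal(0.5, 0.1))

scene = rng.standard_normal((32, 32))                        # latent scene content
x_clean = render(scene, sample_conditions(train_sigma))      # typical under P(C)
x_corrupt = render(scene, sample_conditions(shifted_sigma))  # low-probability under P(C)
```

Under this view, evaluating on x_corrupt probes model behavior under a counterfactually altered distribution of imaging conditions rather than an adversarially optimized perturbation.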