Deep neural networks for computer vision are deployed in increasingly safety-critical and socially impactful applications, motivating the need to close the gap in model performance under varied, naturally occurring imaging conditions. Robustness, a term used ambiguously across contexts including adversarial machine learning, refers here to preserving model performance under naturally induced image corruptions or alterations. We perform a systematic review to identify, analyze, and summarize current definitions of, and progress toward, non-adversarial robustness in deep learning for computer vision. We find that this area of research has received disproportionately less attention than adversarial machine learning, yet a significant robustness gap exists, manifesting in performance degradation similar in magnitude to that under adversarial conditions. Toward a more transparent definition of robustness, we provide a conceptual framework based on a structural causal model of the data-generating process and interpret non-adversarial robustness as pertaining to a model's behavior on corrupted images corresponding to low-probability samples from the unaltered data distribution. We identify key architecture, data augmentation, and optimization tactics for improving neural network robustness. This causal perspective reveals that common practices in the literature correspond to causal concepts. We offer perspectives on how future research may address this evident and significant non-adversarial robustness gap.
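The framing of corrupted images as low-probability samples from the unaltered data distribution can be illustrated with a minimal toy sketch. Here we assume (purely for illustration; none of this is from the paper) a Gaussian model of clean pixel intensities, apply additive Gaussian noise as a stand-in for a naturally induced corruption, and observe that corrupted samples receive lower likelihood under the clean-data model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" image distribution: pixels ~ N(mu, sigma^2), a hypothetical
# stand-in for the unaltered data-generating process.
mu, sigma = 0.5, 0.1
clean = rng.normal(mu, sigma, size=(100, 32 * 32))

# Naturally induced corruption: additive Gaussian noise, one common
# benchmark-style corruption.
corrupted = clean + rng.normal(0.0, 0.2, size=clean.shape)

def avg_log_likelihood(x, mu, sigma):
    """Mean per-pixel log-density under the clean-pixel model."""
    return np.mean(-0.5 * ((x - mu) / sigma) ** 2
                   - np.log(sigma * np.sqrt(2 * np.pi)))

ll_clean = avg_log_likelihood(clean, mu, sigma)
ll_corrupt = avg_log_likelihood(corrupted, mu, sigma)

# Corrupted images score as lower-probability samples under the
# clean-data distribution.
print(ll_corrupt < ll_clean)
```

The gap between `ll_clean` and `ll_corrupt` is the toy analogue of the distribution shift that degrades model performance under corruption.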