In real-world applications of machine learning, reliable and safe systems must consider measures of performance beyond standard test set accuracy. These other goals include out-of-distribution (OOD) robustness, prediction consistency, resilience to adversaries, calibrated uncertainty estimates, and the ability to detect anomalous inputs. However, improving performance towards these goals is often a balancing act that today's methods cannot achieve without sacrificing performance on other safety axes. For instance, adversarial training improves adversarial robustness but sharply degrades other classifier performance metrics. Similarly, strong data augmentation and regularization techniques often improve OOD robustness but harm anomaly detection, raising the question of whether a Pareto improvement on all existing safety measures is possible. To meet this challenge, we design a new data augmentation strategy utilizing the natural structural complexity of pictures such as fractals, which outperforms numerous baselines, is near Pareto-optimal, and roundly improves safety measures.
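The augmentation strategy described above can be sketched in a minimal, self-contained form: repeatedly mix a clean training image with a structurally complex "mixing picture," alternating between additive and multiplicative blending. The abstract does not specify the mixing procedure, so the function names, the procedural multi-scale pattern standing in for a real fractal, and all hyperparameters (`k`, `beta`) here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np


def make_complex_pattern(size, rng, octaves=4):
    """Synthesize a multi-scale noise pattern as a stand-in for a
    structurally complex mixing picture (e.g., a fractal).

    This is a self-contained substitute; the actual strategy draws
    on natural pictures with high structural complexity.
    """
    pattern = np.zeros((size, size))
    for o in range(octaves):
        s = 2 ** o
        # Coarse random grid, upsampled to full size with nearest-neighbor
        # blocks via a Kronecker product, weighted by inverse scale.
        coarse = rng.random((size // s + 1, size // s + 1))
        up = np.kron(coarse, np.ones((s, s)))[:size, :size]
        pattern += up / (2 ** o)
    pattern -= pattern.min()
    return pattern / pattern.max()  # normalize to [0, 1]


def complexity_augment(image, rng, k=4, beta=3.0):
    """Mix a clean image (values in [0, 1]) with complex patterns.

    Performs a random number of mixing rounds, each either additive
    or multiplicative (geometric), with Beta-distributed mixing
    weights. `k` and `beta` are assumed hyperparameters.
    """
    mixed = image.copy()
    eps = 1e-6  # avoid 0 ** w issues in the geometric branch
    for _ in range(rng.integers(1, k + 1)):
        pattern = make_complex_pattern(image.shape[0], rng)
        w = rng.beta(beta, beta)
        if rng.random() < 0.5:
            mixed = (1 - w) * mixed + w * pattern          # additive mixing
        else:
            mixed = (mixed + eps) ** (1 - w) * (pattern + eps) ** w  # geometric mixing
    return np.clip(mixed, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((32, 32))
    augmented = complexity_augment(clean, rng)
    print(augmented.shape, float(augmented.min()) >= 0.0, float(augmented.max()) <= 1.0)
```

In practice such an augmentation would be applied on the fly during training, with the complex patterns sampled from a fixed set of source pictures rather than generated procedurally as in this sketch.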