Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. Here, we establish SAM as a relaxation of the Bayes objective where the expected negative-loss is replaced by the optimal convex lower bound, obtained by using the so-called Fenchel biconjugate. The connection enables a new Adam-like extension of SAM to automatically obtain reasonable uncertainty estimates, while sometimes also improving its accuracy. By connecting adversarial and Bayesian methods, our work opens a new path to robustness.
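To make the relaxation concrete, the following is a schematic sketch in standard notation: the variational form of the Bayes objective, the Fenchel conjugate and biconjugate that supply the tightest convex lower bound, and the SAM-type min-max objective the bound leads to. The Gaussian choice of q and the link between the radius rho and the posterior scale are illustrative assumptions here, not the paper's precise statement.

% Sketch (requires amsmath, amssymb). The Bayes objective in its
% variational form, with candidate posterior q and prior p:
\[
  \min_{q}\; \mathbb{E}_{q(\theta)}\big[\ell(\theta)\big] + \mathrm{KL}(q \,\|\, p).
\]
% The relaxation replaces the expected negative loss E_q[-ell] by the
% Fenchel biconjugate (-ell)^{**}, the tightest convex lower bound of -ell:
\[
  f^{*}(y) = \sup_{\theta}\,\big\{\langle \theta, y\rangle - f(\theta)\big\},
  \qquad
  f^{**}(\theta) = \sup_{y}\,\big\{\langle \theta, y\rangle - f^{*}(y)\big\}.
\]
% For an illustrative Gaussian q = N(theta, sigma^2 I), the resulting
% lower bound takes the familiar SAM form, with rho tied to the scale sigma:
\[
  \min_{\theta}\; \max_{\|\epsilon\|_2 \le \rho}\; \ell(\theta + \epsilon).
\]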