We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro (2019), as well as of a broader family of learners which we identify as local learners. Our results are enabled by adopting a global perspective, specifically through a key technical contribution: the global one-inclusion graph, which generalizes the classical one-inclusion graph of Haussler, Littlestone, and Warmuth (1994) and may be of independent interest. Finally, as a byproduct, we identify a dimension characterizing, qualitatively and quantitatively, which classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem posed by Montasser et al. (2019) and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
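As context for the construction (a standard statement of the classical guarantee; the symbols $n$, $d$, and $\hat{y}$ below are our notation, not the abstract's): given $n$ examples labeled by some $h^* \in \mathcal{H}$ and a test point drawn exchangeably with them, the one-inclusion graph strategy of Haussler, Littlestone, and Warmuth (1994) predicts the test label with expected error
\[
\mathbb{E}\big[\mathbb{1}\{\hat{y} \neq h^*(x_{n+1})\}\big] \;\le\; \frac{d}{n+1}, \qquad d = \mathrm{VC}(\mathcal{H}),
\]
which is minimax optimal up to a constant factor in the realizable setting; the global one-inclusion graph introduced here plays the analogous role for the robust loss.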