Randomized ensemble classifiers (RECs), where one classifier is randomly selected during inference, have emerged as an attractive alternative to traditional ensembling methods for realizing adversarially robust classifiers with limited compute requirements. However, recent works have shown that existing methods for constructing RECs are more vulnerable than initially claimed, casting major doubts on their efficacy and prompting fundamental questions such as: "When are RECs useful?", "What are their limits?", and "How do we train them?". In this work, we first demystify RECs by deriving fundamental results regarding their theoretical limits, necessary and sufficient conditions for them to be useful, and more. Leveraging this new understanding, we propose a new boosting algorithm (BARRE) for training robust RECs, and empirically demonstrate its effectiveness at defending against strong $\ell_\infty$ norm-bounded adversaries across various network architectures and datasets.
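To make the REC inference mechanism concrete, below is a minimal sketch of randomized inference, written in PyTorch as an assumption (the abstract does not prescribe a framework); the member networks and the sampling distribution `alpha` are hypothetical placeholders, not the paper's implementation. Each query is answered by a single randomly sampled member, which is why an REC's inference cost stays close to that of one model rather than the full ensemble.

```python
# Minimal sketch of REC inference (assumed PyTorch; illustrative only).
import torch
import torch.nn as nn


class RandomizedEnsemble(nn.Module):
    """Answers each query with one randomly selected member classifier."""

    def __init__(self, members: list, alpha: list):
        super().__init__()
        self.members = nn.ModuleList(members)
        # Sampling distribution over the M member classifiers
        # (hypothetical parameter; must be non-negative and sum to 1).
        self.alpha = torch.tensor(alpha, dtype=torch.float)

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Draw one member index per query; only that member runs a
        # forward pass, unlike averaging-based ensembles that run all M.
        i = torch.multinomial(self.alpha, num_samples=1).item()
        return self.members[i](x)
```

As a usage note under the same assumptions, `RandomizedEnsemble([f1, f2, f3], alpha=[0.5, 0.25, 0.25])(x)` would return the logits of whichever of the three hypothetical models `f1`, `f2`, `f3` is sampled for that query.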