We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets. We address the question of which function classes are PAC learnable in this setting. We show that classes of finite fat-shattering dimension are learnable; moreover, convex function classes are even properly learnable. In contrast, some non-convex function classes provably require improper learning algorithms. We also discuss extensions to agnostic learning. Our main technique is based on a construction of an adversarially robust sample compression scheme whose size is determined by the fat-shattering dimension.