We initiate the study of tolerant adversarial PAC-learning with respect to metric perturbation sets. In adversarial PAC-learning, an adversary is allowed to replace a test point $x$ with an arbitrary point in a closed ball of radius $r$ centered at $x$. In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius $(1+\gamma)r$. This simple tweak helps us bridge the gap between theory and practice and obtain the first PAC-type guarantees for algorithmic techniques that are popular in practice. Our first result concerns the widely-used ``perturb-and-smooth'' approach for adversarial learning. For perturbation sets with doubling dimension $d$, we show that a variant of this approach PAC-learns any hypothesis class $\mathcal{H}$ with VC-dimension $v$ in the $\gamma$-tolerant adversarial setting with $O\left(\frac{v(1+1/\gamma)^{O(d)}}{\varepsilon}\right)$ samples. This is in contrast to the traditional (non-tolerant) setting in which, as we show, the perturb-and-smooth approach can provably fail. Our second result shows that one can PAC-learn the same class using $\widetilde{O}\left(\frac{d \cdot v\log(1+1/\gamma)}{\varepsilon^2}\right)$ samples even in the agnostic setting. This result is based on a novel compression-based algorithm, and achieves a linear dependence on the doubling dimension as well as the VC-dimension. This is in contrast to the non-tolerant setting, where there is no known sample complexity upper bound that depends polynomially on the VC-dimension.
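To make the ``perturb-and-smooth'' idea concrete, the following is a minimal sketch in Python. It is an illustration, not the paper's algorithm: the base learner (a nearest-centroid rule), the $L_2$ ball perturbation set, the radii, and all function names are assumptions chosen for the example. The ``perturb'' step trains on randomly perturbed copies of the training data; the ``smooth'' step predicts by a majority vote of the trained classifier over random perturbations of the test point.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(x, radius, rng):
    """Sample a uniform point in the closed L2 ball of the given radius around x."""
    d = x.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return x + direction * radius * rng.uniform() ** (1.0 / d)

def perturb_and_smooth_fit(X, y, radius, rng):
    """'Perturb' step: train the base learner on randomly perturbed copies
    of the data. The base learner here is a nearest-centroid rule,
    an illustrative choice (not the paper's)."""
    Xp = np.array([perturb(x, radius, rng) for x in X])
    return {c: Xp[y == c].mean(axis=0) for c in np.unique(y)}

def smooth_predict(centroids, x, radius, rng, k=50):
    """'Smooth' step: majority vote of base predictions over k random
    perturbations of the test point."""
    votes = [
        min(centroids, key=lambda c: np.linalg.norm(perturb(x, radius, rng) - centroids[c]))
        for _ in range(k)
    ]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Toy data: two well-separated Gaussian clusters.
X = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
               rng.normal([3, 3], 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

centroids = perturb_and_smooth_fit(X, y, radius=0.2, rng=rng)
pred = smooth_predict(centroids, np.array([0.1, 0.0]), radius=0.2, rng=rng)
```

Because the clusters are far apart relative to the perturbation radius, the smoothed vote is stable, which is the intuition behind why tolerance (comparing against radius $(1+\gamma)r$) makes such randomized schemes analyzable.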