Recent studies have demonstrated that adversarially robust learning under $\ell_\infty$ attack is harder to generalize across domains than standard domain adaptation. How to transfer robustness across domains has been a key question in the domain adaptation field. To investigate the fundamental difficulty behind adversarially robust domain adaptation (or robustness transfer), we propose to analyze a key complexity measure that controls cross-domain generalization: the adversarial Rademacher complexity over the {\em symmetric difference hypothesis space} $\mathcal{H} \Delta \mathcal{H}$. For linear models, we show that the adversarial version of this complexity is always greater than its non-adversarial counterpart, which reveals the intrinsic hardness of adversarially robust domain adaptation. We also establish upper bounds on this complexity measure. We then extend these results to the ReLU neural network class by upper bounding the adversarial Rademacher complexity in the binary classification setting. Finally, even though robust domain adaptation is provably harder, we do find a positive relation between robust learning and standard domain adaptation: we explain \emph{how adversarial training helps domain adaptation in terms of standard risk}. We believe our results initiate the study of the generalization theory of adversarially robust domain adaptation, and could shed light on distributed adversarially robust learning from heterogeneous sources, e.g., the federated learning scenario.
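For concreteness, the complexity measure discussed above can be written out as follows. This is a sketch under the standard definition of empirical adversarial Rademacher complexity with $\ell_\infty$ perturbations of radius $\epsilon$, instantiated on the symmetric difference class; the sample $S = \{x_1, \dots, x_n\}$, radius $\epsilon$, and Rademacher variables $\sigma_i \in \{\pm 1\}$ are notational assumptions, not taken from the original text:
\[
\mathfrak{R}_S^{\mathrm{adv}}(\mathcal{H} \Delta \mathcal{H})
= \mathbb{E}_{\sigma}\!\left[ \sup_{h, h' \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \sup_{\|x_i' - x_i\|_\infty \le \epsilon} \mathbf{1}\!\left[ h(x_i') \neq h'(x_i') \right] \right].
\]
Setting $\epsilon = 0$ recovers the standard (non-adversarial) Rademacher complexity over $\mathcal{H} \Delta \mathcal{H}$, which is the quantity the adversarial version is compared against in the linear-model result.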