The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it is surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or with more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g., noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.