This paper concerns the surprising interaction between a foundational result of model theory on the stability of theories, which seems inherently about the infinite, and algorithmic stability in learning. Specifically, we develop a complete algorithmic analogue of Shelah's celebrated Unstable Formula Theorem, with algorithmic properties taking the place of the infinite. This draws on several new theorems as well as much recent work. In particular, we introduce a new ``Probably Eventually Correct'' learning model, of independent interest, and characterize Littlestone (stable) classes in terms of this model; and we describe Littlestone classes via approximations, by analogy to definability of types in model theory.