This paper contributes to the study of CPAC learnability -- a computable version of PAC learning -- by solving three open questions from recent papers. First, we prove that every improperly CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity, confirming a conjecture of Agarwal et al. (COLT 2021). Second, we show that there exists a decidable hypothesis class which is properly CPAC learnable, but only with uncomputably fast-growing sample complexity; this answers a question of Sterkenburg (COLT 2022). Finally, we construct a decidable class of finite Littlestone dimension which is not improperly CPAC learnable, strengthening a recent result of Sterkenburg (2022) and answering a question posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our results provide a complete landscape of the learnability problem in the CPAC setting.