We analyse the complexity of learning first-order queries in a model-theoretic framework for supervised learning introduced by Grohe and Tur\'an (TOCS 2004). Previous research on the complexity of learning in this framework focused on the question of when learning is possible in time sublinear in the size of the background structure. Here we study the parameterized complexity of the learning problem. We have two main results. The first is a hardness result, showing that learning first-order queries is at least as hard as the corresponding model-checking problem, which implies that on general structures it is hard for the parameterized complexity class AW[*]. Our second main contribution is a fixed-parameter tractable agnostic PAC learning algorithm for first-order queries over sparse relational data (more precisely, over nowhere dense background structures).