Approaches based on refinement operators have been successfully applied to class expression learning on RDF knowledge graphs. These approaches often need to explore a large number of concepts to find adequate hypotheses. This need arguably stems from current approaches relying on myopic heuristic functions to guide their search through an infinite concept space. In turn, deep reinforcement learning provides an effective means to address myopia by estimating the discounted cumulative future reward that states promise. In this work, we leverage deep reinforcement learning to accelerate the learning of concepts in $\mathcal{ALC}$ by proposing DRILL -- a novel class expression learning approach that uses a convolutional deep Q-learning model to steer its search. By virtue of its architecture, DRILL is able to compute the expected discounted cumulative future reward of more than $10^3$ class expressions per second on standard hardware. We evaluate DRILL on four benchmark datasets against state-of-the-art approaches. Our results suggest that DRILL converges to goal states at least 2.7$\times$ faster than state-of-the-art models on all benchmark datasets. We provide an open-source implementation of our approach, including training and evaluation scripts as well as pre-trained models.
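As a rough illustration of the quantity DRILL estimates (the standard Q-learning formulation; the concrete state, action, and reward definitions used by DRILL are introduced later in the paper), the expected discounted cumulative future reward of applying a refinement $a$ to a class expression $s$ under a policy $\pi$ can be written as
\[
Q^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1} \,\middle|\, s_0 = s,\; a_0 = a\right], \qquad \gamma \in [0, 1),
\]
where $r_{t+1}$ denotes the reward received after the $t$-th refinement step and $\gamma$ is the discount factor.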