The deployment of Deep Learning (DL) models is still precluded in contexts where the amount of supervised data is limited. To address this issue, active learning strategies aim to minimize the amount of labelled data required to train a DL model. Most active learning strategies are based on uncertainty-driven sample selection, often further restricted to samples lying close to the decision boundary. These techniques are theoretically sound, but understanding the selected samples based on their content is not straightforward, which further drives non-experts to consider DL as a black box. Here, for the first time, we propose a different approach that takes common domain knowledge into consideration and enables non-expert users to train a model with fewer samples. In our Knowledge-driven Active Learning (KAL) framework, rule-based knowledge is converted into logic constraints, and their violation is checked as a natural guide for sample selection. We show that even simple relationships among data and output classes offer a way to spot predictions for which the model needs supervision. The proposed approach (i) outperforms many active learning strategies in terms of average F1 score, particularly in contexts where domain knowledge is rich. Furthermore, we empirically demonstrate that (ii) KAL discovers data distributions lying far from the initial training data, unlike uncertainty-based strategies, (iii) it assures domain experts that the provided knowledge is respected by the model on test data, and (iv) it can be employed even when domain knowledge is not available, by coupling it with an XAI technique. Finally, we show that KAL is also suitable for object recognition tasks and that its computational demand is low, unlike many recent active learning strategies.
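To illustrate the core idea of checking constraint violations as a guide for sample selection, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical domain rule "class A implies class B", relaxes it with a product t-norm into a fuzzy violation score over predicted probabilities, and selects the unlabelled samples that violate it most. The function names (`implication_violation`, `select_for_labelling`) are illustrative.

```python
import numpy as np

def implication_violation(p_a, p_b):
    """Fuzzy violation of the rule 'A -> B': high when A is predicted
    with high probability but B is not (product t-norm relaxation)."""
    return p_a * (1.0 - p_b)

def select_for_labelling(probs, n_select=10):
    """Rank unlabelled samples by how strongly the model's predictions
    violate the domain rule and return the indices to annotate next."""
    p_a, p_b = probs[:, 0], probs[:, 1]
    scores = implication_violation(p_a, p_b)
    return np.argsort(scores)[::-1][:n_select]

# Toy example: predicted probabilities for 1000 unlabelled samples,
# one column per class (A, B).
rng = np.random.default_rng(0)
probs = rng.random((1000, 2))
to_label = select_for_labelling(probs, n_select=5)
print(to_label)
```

In the full framework, richer rule-based knowledge would be compiled into a set of such logic constraints, and the aggregated violation score would play the role that predictive uncertainty plays in standard active learning loops.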