We describe a novel approach to explainable prediction of a continuous variable based on learning fuzzy weighted rules. Our model learns a set of weighted rules that jointly maximise prediction accuracy and minimise an ontology-based 'semantic loss', which encodes user-specified constraints on the rules to be learned so as to maximise the explainability of the resulting rule set from the user's perspective. The system thus fuses quantitative, sub-symbolic learning with symbolic learning and constraints derived from domain knowledge. We illustrate our system on a case study predicting the outcomes of behavioural interventions for smoking cessation, and show that it outperforms other interpretable approaches, achieving performance close to that of a deep learning model while offering the transparent explainability that is an essential requirement for decision-makers in the health domain.
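To make the combined objective concrete, the following is a minimal sketch, not the authors' implementation: fuzzy rules with Gaussian memberships produce a weighted prediction, and the training loss adds a stand-in "semantic" penalty term (here, a sparsity penalty on rule weights as a placeholder for the ontology-based constraints) to the usual squared-error term. All function names and the `lam` trade-off parameter are hypothetical.

```python
import numpy as np

def fuzzy_membership(x, centers, widths):
    # Gaussian membership of each sample in each rule's fuzzy antecedent
    return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                  / (2.0 * widths[None, :] ** 2))

def predict(x, centers, widths, outputs, weights):
    # Firing strength of each rule, scaled by its learned weight
    firing = weights[None, :] * fuzzy_membership(x, centers, widths)
    # Prediction is the firing-strength-weighted average of rule outputs
    return (firing * outputs[None, :]).sum(axis=1) / (firing.sum(axis=1) + 1e-9)

def combined_loss(y_true, y_pred, weights, lam=0.1):
    # Accuracy term: mean squared error
    mse = np.mean((y_true - y_pred) ** 2)
    # Placeholder "semantic loss": penalise large/many rule weights
    # (a real system would score rules against ontology-based constraints)
    semantic = np.sum(np.abs(weights))
    return mse + lam * semantic

# Single rule centred at 0 with output 2.0: a sample at x=0 fires it fully
x = np.array([0.0])
pred = predict(x, np.array([0.0]), np.array([1.0]),
               np.array([2.0]), np.array([1.0]))
loss = combined_loss(np.array([2.0]), pred, np.array([1.0]), lam=0.1)
```

In this toy setup a gradient-based or evolutionary optimiser would adjust `centers`, `widths`, `outputs`, and `weights` to drive `combined_loss` down, trading prediction accuracy against the rule-level penalty via `lam`.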