Artificial neural networks are exceptionally good at learning to detect correlations within data that are associated with specific outcomes. However, the black-box nature of such models can hinder the advancement of knowledge in research fields by obscuring the decision process and preventing scientists from fully conceptualizing predicted outcomes. Furthermore, domain experts such as healthcare providers need explainable predictions to assess whether a predicted outcome can be trusted in high-stakes scenarios and to help them incorporate a model into their own routine. Interpretable models therefore play a crucial role in the adoption of machine learning in high-stakes domains such as healthcare. In this paper we introduce Convolutional Motif Kernel Networks, a neural network architecture that learns a feature representation within a subspace of the reproducing kernel Hilbert space of the position-aware motif kernel function. The resulting model allows prediction outcomes to be directly interpreted and validated, providing a biologically and medically meaningful explanation without the need for additional \textit{post-hoc} analysis. We show that our model is able to learn robustly on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
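For intuition only, the following is a minimal sketch of one way such a kernel layer could be realized, in the spirit of convolutional kernel networks: one-hot sequence windows are compared against learned anchor motifs with an exponential dot-product kernel on the unit sphere. The class \textit{MotifKernelLayer}, its parameters, and the omission of the position-aware weighting are our own illustrative assumptions, not the exact architecture introduced in the paper.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotifKernelLayer(nn.Module):
    """Illustrative sketch (not the paper's implementation): compares
    one-hot sequence windows against learned anchor motifs with an
    exponential dot-product kernel; positional weighting is omitted."""

    def __init__(self, alphabet_size, motif_length, num_anchors, alpha=2.0):
        super().__init__()
        # Learned anchor motifs live in the same space as flattened
        # sequence windows and act as the basis of the learned subspace.
        self.anchors = nn.Parameter(
            torch.randn(num_anchors, alphabet_size * motif_length))
        self.motif_length = motif_length
        self.alpha = alpha

    def forward(self, x):
        # x: (batch, alphabet_size, seq_len), one-hot encoded sequences.
        windows = x.unfold(2, self.motif_length, 1)       # (B, A, P, m)
        windows = windows.permute(0, 2, 1, 3).flatten(2)  # (B, P, A*m)
        w = F.normalize(windows, dim=-1)
        a = F.normalize(self.anchors, dim=-1)
        # k(w, a) = exp(alpha * (<w, a> - 1)) at every window position.
        return torch.exp(self.alpha * (w @ a.t() - 1.0))  # (B, P, anchors)
\end{verbatim}
Because each output unit is a kernel evaluation against a specific anchor motif at a specific position, its activation can be read directly as "how strongly this learned motif matches here", which is the property that makes per-position explanations possible without \textit{post-hoc} attribution methods.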