Despite recent advances in modern machine learning algorithms, the opacity of their underlying mechanisms remains an obstacle to their adoption. To instill confidence and trust in artificial intelligence systems, Explainable Artificial Intelligence has emerged as a field devoted to improving the explainability of modern machine learning algorithms. Inductive Logic Programming (ILP), a subfield of symbolic artificial intelligence, plays a promising role in generating interpretable explanations because of its intuitive, logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges must be addressed before ILP-inspired methods can be applied successfully in practice. For example, existing ILP systems often search a vast solution space, and the induced solutions are highly sensitive to noise and disturbances. This survey summarizes recent advances in ILP and discusses statistical relational learning and neural-symbolic algorithms, which offer synergistic views of ILP. Following a critical review of these advances, we delineate observed challenges and highlight promising avenues of further ILP-motivated research toward developing self-explanatory artificial intelligence systems.
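For concreteness, the learning task that ILP addresses can be stated in its classical form. The following is a minimal sketch of the standard (normal-semantics) ILP problem setting as it appears in the literature, included here for illustration; the symbols $B$, $H$, $E^{+}$, and $E^{-}$ are conventional notation rather than definitions taken from this survey.

% Standard ILP problem setting (normal semantics); illustrative sketch only.
\begin{align*}
&\text{Given: background knowledge } B,\ \text{positive examples } E^{+},\ \text{negative examples } E^{-}.\\
&\text{Find: a hypothesis } H \text{ (a set of first-order clauses) such that}\\
&\qquad B \wedge H \models E^{+} \quad \text{(completeness: all positive examples are entailed)},\\
&\qquad B \wedge H \not\models E^{-} \quad \text{(consistency: no negative example is entailed)}.
\end{align*}

The sensitivity to noise mentioned above follows directly from this formulation: a single mislabeled example can render every otherwise-correct hypothesis inconsistent, which motivates the relaxed, probabilistic variants surveyed later.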