Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, influential thinkers have raised concerns about the trust, safety, interpretability and accountability of AI. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning, and for sound explainability. Neural-symbolic computing has been an active area of research for many years, seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. In this paper, we relate recent and early research results in neurosymbolic AI with the objective of identifying the key ingredients of the next wave of AI systems. We focus on research that integrates, in a principled way, neural network-based learning with symbolic knowledge representation and logical reasoning. The insights provided by 20 years of neural-symbolic computing are shown to shed new light onto the increasingly prominent roles of trust, safety, interpretability and accountability in AI. We also identify promising directions and challenges for the next decade of AI research from the perspective of neural-symbolic systems.