An important aspect of artificial intelligence (AI) is the ability to reason in a step-by-step, "algorithmic" manner that can be inspected and verified for correctness. This is especially important in the domain of question answering (QA). We argue that the challenge of algorithmic reasoning in QA can be effectively tackled with a "systems" approach to AI that features a hybrid use of symbolic and sub-symbolic methods, including deep neural networks. Additionally, we argue that while neural network models with end-to-end training pipelines perform well in narrow applications such as image classification and language modelling, they cannot, on their own, successfully perform algorithmic reasoning, especially when the task spans multiple domains. We discuss a few notable exceptions and point out how they remain limited when the QA problem is widened to include other intelligence-requiring tasks. Deep learning, and machine learning in general, nevertheless play important roles as components of the reasoning process. We propose an approach to algorithmic reasoning for QA, Deep Algorithmic Question Answering (DAQA), based on three desirable properties such an AI system should possess: interpretability, generalizability, and robustness. We conclude that these properties are best achieved with a combination of hybrid and compositional AI.