An important aspect of artificial intelligence (AI) is the ability to reason in a step-by-step "algorithmic" manner that can be inspected and verified for correctness. This is especially important in the domain of question answering (QA). We argue that the challenge of algorithmic reasoning in QA can be effectively tackled with a "systems" approach to AI, which features a hybrid use of symbolic and sub-symbolic methods, including deep neural networks. Additionally, we argue that while neural network models with end-to-end training pipelines perform well in narrow applications such as image classification and language modelling, they cannot, on their own, successfully perform algorithmic reasoning, especially if the task spans multiple domains. We discuss a few notable exceptions and point out how they remain limited when the QA problem is widened to include other intelligence-requiring tasks. Nevertheless, deep learning, and machine learning in general, play important roles as components in the reasoning process. We propose an approach to algorithmic reasoning for QA, Deep Algorithmic Question Answering (DAQA), based on three properties such an AI system should possess: interpretability, generalizability, and robustness. We conclude that these are best achieved with a combination of hybrid and compositional AI.