This paper describes problems in AI research and how the SP System (described in an appendix) may help to solve them. Most of the problems are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book {\em Architects of Intelligence}. These problems are: the need to bridge the divide between symbolic and non-symbolic kinds of knowledge and processing; the tendency of deep neural networks (DNNs) to make large and unexpected errors in recognition; the need to strengthen the representation and processing of natural languages; the challenges of unsupervised learning; the need for a coherent account of generalisation; how to learn usable knowledge from a single exposure; how to achieve transfer learning; how to increase the efficiency of AI processing; the need for transparency in AI structures and processes; how to achieve varieties of probabilistic reasoning; the need for more emphasis on top-down strategies; how to minimise the risk of accidents with self-driving vehicles; the need for strong compositionality in AI knowledge; the challenges of commonsense reasoning and commonsense knowledge; establishing the importance of information compression in AI research; establishing the importance of a biological perspective in AI research; establishing whether knowledge in the brain is represented in `distributed' or `localist' form; how to bypass the limited scope for adaptation in deep neural networks; the need to develop `broad AI'; and how to eliminate the problem of catastrophic forgetting.