A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis were correct, we could more easily both understand our own intelligence and build intelligent machines. Just as in physics, the principles themselves would not suffice to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis suggests that studying the kinds of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those that mostly concern higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities in flexible out-of-distribution and systematic generalization, currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.