There is a concerted effort to build domain-general artificial intelligence in the form of universal neural network models with sufficient computational flexibility to solve a wide variety of cognitive tasks but without requiring fine-tuning on individual problem spaces and domains. To do this, models need appropriate priors and inductive biases, such that trained models can generalise to out-of-distribution examples and new problem sets. Here we provide an overview of the hallmarks endowing biological neural networks with the functionality needed for flexible cognition, in order to establish which features might also be important to achieve similar functionality in artificial systems. We specifically discuss the role of system-level distribution of network communication and recurrence, in addition to the role of short-term topological changes for efficient local computation. As machine learning models become more complex, these principles may provide valuable directions in an otherwise vast space of possible architectures. In addition, testing these inductive biases within artificial systems may help us to understand the biological principles underlying domain-general cognition.