Striking progress has recently been made in understanding human cognition by analyzing how its neuronal underpinnings are engaged in different modes of information processing. Specifically, neural information can be decomposed into synergistic, redundant, and unique features, with synergistic components being particularly aligned with complex cognition. However, two fundamental questions remain unanswered: (a) precisely how and why a cognitive system can become highly synergistic; and (b) how these informational states map onto artificial neural networks in various learning modes. To address these questions, here we employ an information-decomposition framework to investigate the information-processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks in both supervised and reinforcement learning settings. Our results show that synergy increases as neural networks learn multiple diverse tasks. Furthermore, performance in tasks requiring integration of multiple information sources critically relies on synergistic neurons. Finally, randomly silencing neurons during training via dropout increases network redundancy, which corresponds to an increase in robustness. Overall, our results suggest that while redundant information is required for robustness to perturbations in the learning process, synergistic information is used to combine information from multiple modalities, and more generally to support flexible and efficient learning. These findings open the door to new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies on the system's information dynamics.
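The intuition behind the synergy/redundancy distinction can be illustrated with a minimal, self-contained sketch. The snippet below is not the paper's decomposition framework; it only computes pairwise mutual information (in bits) from empirical samples, and contrasts an XOR relation (each input alone is uninformative about the output, but the pair determines it fully, i.e. purely synergistic) with a COPY relation (each input alone already determines the output, i.e. fully redundant). The function name `mutual_info` and the toy distributions are illustrative choices, not from the source.

```python
from collections import Counter
from math import log2

def mutual_info(pairs):
    """Estimate I(A;B) in bits from a list of (a, b) samples,
    treating the empirical frequencies as the joint distribution."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
        for (a, b), c in p_ab.items()
    )

# XOR: a purely synergistic relation between inputs and output.
xor = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)]
i_x  = mutual_info([(x, z) for x, y, z in xor])       # each input alone: 0 bits
i_y  = mutual_info([(y, z) for x, y, z in xor])       # each input alone: 0 bits
i_xy = mutual_info([((x, y), z) for x, y, z in xor])  # jointly: 1 bit

# COPY: a fully redundant relation -- either input alone suffices.
copy = [(x, x, x) for x in (0, 1)]
r_x  = mutual_info([(x, z) for x, y, z in copy])      # 1 bit on its own
```

A full partial-information decomposition (separating synergistic, redundant, and unique atoms for arbitrary distributions) requires additional machinery beyond bivariate mutual information; this sketch only shows the two extreme cases the abstract contrasts.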