According to parallel distributed processing (PDP) theory in psychology, neural networks (NNs) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine whether the assumption is correct. However, recent results from psychology, neuroscience and computer science have shown the occasional emergence of local codes in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known properties. We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented and the sparsity of the input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that localist encoding may offer resilience in noisy networks. These data indicate that localist coding can emerge from feed-forward PDP networks and point to some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight that local codes should not be dismissed out of hand.
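As a rough illustration of the kind of experiment described above, the sketch below trains a single-hidden-layer feed-forward network on generated sparse binary input patterns with dropout on the hidden layer, then counts hidden units that respond selectively to a single class. The data statistics, training regime and selectivity criterion are not taken from the paper; they are illustrative assumptions only.

```python
# Hedged sketch of the abstract's setup: the network shape, the random sparse
# prototypes, the noise level, and the "local code" margin below are all
# assumptions made for illustration, not the authors' exact procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_classes, n_inputs, n_hidden = 20, 100, 50
examples_per_class, sparsity, dropout_p = 10, 0.2, 0.5

# Generated input data with known properties: each class is a random sparse
# binary prototype; examples are noisy copies of their class prototype.
prototypes = (torch.rand(n_classes, n_inputs) < sparsity).float()
X = prototypes.repeat_interleave(examples_per_class, dim=0)
X = (X + 0.05 * torch.randn_like(X)).clamp(0.0, 1.0)
y = torch.arange(n_classes).repeat_interleave(examples_per_class)

model = nn.Sequential(
    nn.Linear(n_inputs, n_hidden),
    nn.Sigmoid(),
    nn.Dropout(p=dropout_p),         # dropout applied to the hidden layer
    nn.Linear(n_hidden, n_classes),  # 1-hot output code via cross-entropy
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(2000):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Count hidden units whose mean activation for one class clearly exceeds
# that for every other class (an assumed selectivity margin of 0.5).
model.eval()
with torch.no_grad():
    hidden = model[1](model[0](X))   # hidden activations, dropout disabled

local_codes = 0
for unit in range(n_hidden):
    acts = hidden[:, unit]
    per_class = torch.stack([acts[y == c].mean() for c in range(n_classes)])
    top2 = per_class.topk(2).values
    if top2[0] - top2[1] > 0.5:
        local_codes += 1

print(f"hidden units counted as local codes: {local_codes}/{n_hidden}")
```

Sweeping `n_hidden`, `sparsity`, `examples_per_class` and `dropout_p` in such a sketch would mirror the manipulations the abstract reports, though the precise criterion for a "local code" is the key assumption here.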