Representation is a key notion in neuroscience and artificial intelligence (AI). However, a longstanding philosophical debate shows that specifying what counts as a representation is trickier than it seems. With this brief opinion paper we would like to draw attention to the philosophical problem of representation and to offer an implementable solution. We note that the causal and teleological approaches often assumed by neuroscientists and engineers fail to provide a satisfactory account of representation. We sketch an alternative according to which representations correspond to inferred latent structures in the world, identified on the basis of conditional patterns of activation. These structures are assumed to possess certain properties objectively, which enables planning, prediction, and the detection of unexpected events. We illustrate our proposal with a simulation of a simple neural network model. We believe this stronger notion of representation could inform future research in neuroscience and AI.
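To make the core idea concrete, here is a minimal sketch of what "inferring a latent structure from conditional patterns of activation" can look like in practice. This is our own illustrative toy, not the paper's actual neural network simulation: a hidden binary structure in the world drives several noisy sensors, and an observer that learns the conditional activation statistics can both infer the latent state and flag unexpected events. All names and parameter values (`n_sensors`, `p_on_given_latent`, etc.) are assumptions chosen for the sketch.

```python
# Toy illustration (assumed setup, not the authors' model): a latent binary
# structure in the world drives noisy sensors; an observer that tracks
# conditional activation probabilities can infer the latent state and
# detect patterns that fit neither learned regime (unexpected events).

import numpy as np

rng = np.random.default_rng(0)

n_sensors = 6
p_on_given_latent = 0.9   # sensor fires when the latent structure is present
p_on_given_absent = 0.1   # baseline firing rate otherwise


def world_step(latent_present):
    """Generate one vector of sensor activations given the latent state."""
    p = p_on_given_latent if latent_present else p_on_given_absent
    return rng.random(n_sensors) < p


# --- Learning phase: estimate conditional activation statistics ------------
# For simplicity the sketch conditions directly on the true latent state.
samples_present = np.array([world_step(True) for _ in range(2000)])
samples_absent = np.array([world_step(False) for _ in range(2000)])

p_hat_present = samples_present.mean(axis=0)   # P(sensor on | structure present)
p_hat_absent = samples_absent.mean(axis=0)     # P(sensor on | structure absent)


def log_likelihood(x, p_hat):
    """Bernoulli log-likelihood of a sensor pattern under estimated probabilities."""
    eps = 1e-9
    return np.sum(x * np.log(p_hat + eps) + (1 - x) * np.log(1 - p_hat + eps))


def infer_latent(x):
    """Which latent hypothesis explains the observed activation pattern better?"""
    return log_likelihood(x, p_hat_present) > log_likelihood(x, p_hat_absent)


def surprise(x):
    """Negative log-likelihood under the best hypothesis: high = unexpected event."""
    return -max(log_likelihood(x, p_hat_present), log_likelihood(x, p_hat_absent))


# --- Test: inference and detection of an unexpected event ------------------
x_normal = world_step(True)
x_weird = rng.random(n_sensors) < 0.5          # pattern drawn from neither regime

print("inferred latent for normal pattern:", infer_latent(x_normal))
print("surprise (normal): %.2f" % surprise(x_normal))
print("surprise (unexpected): %.2f" % surprise(x_weird))
```

In this sketch the "representation" is not any single activation but the inferred latent variable, read off from which conditional pattern the observations best fit; the surprise score shows how the same inferred structure supports prediction and the detection of unexpected events, in the spirit of the proposal above.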