We present a unified computational theory of an agent's perception and memory. In our model, perception, episodic memory, and semantic memory are realized by different operational modes of the oscillating interactions between a symbolic index layer and a subsymbolic representation layer; together, the two layers form a bilayer tensor network (BTN). Although memory appears to be about the past, its main purpose is to support the agent in the present and the future. Recent episodic memory provides the agent with a sense of the here and now. Remote episodic memory retrieves relevant past experiences to provide information about possible future scenarios, which aids the agent in decision-making. "Future" episodic memory, based on expected future events, guides planning and action. Semantic memory retrieves specific information that is not delivered by current perception and defines priors for future observations. We argue that it is important for the agent to encode individual entities, not just classes and attributes. We demonstrate that a form of self-supervised learning can acquire new concepts and refine existing ones. We test our model on a standard benchmark dataset, which we expanded to contain richer representations for attributes, classes, and individuals. Our key hypothesis is that a better understanding of perception and memory is a crucial prerequisite to comprehending human-level intelligence.
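The core interaction can be pictured as alternating passes between the two layers. The following is a minimal sketch, not the paper's actual model: it assumes the "oscillating interaction" can be read as a bottom-up pass (the representation layer activates symbols in the index layer via inner products with their embeddings) followed by a top-down pass (the activated symbols feed their embeddings back into the representation layer). All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n_symbols symbols (entities, attributes,
# classes) in the index layer; d-dimensional representation layer.
n_symbols, d = 5, 8

# Each symbol owns an embedding vector; rows of A are the embeddings.
A = rng.normal(size=(n_symbols, d))

def bottom_up(q):
    """Representation -> index layer: softmax activation of symbols
    whose embeddings align with the current representation q."""
    scores = A @ q
    e = np.exp(scores - scores.max())
    return e / e.sum()  # probability distribution over symbols

def top_down(probs):
    """Index -> representation layer: feed the expected embedding of
    the activated symbols back as the new representation."""
    return probs @ A

# One bottom-up/top-down oscillation, starting from a sensory cue.
q = rng.normal(size=d)
p = bottom_up(q)       # which symbols does the cue activate?
q_next = top_down(p)   # symbolic feedback refines the representation
```

Repeating such oscillations, with different symbols gated in or out, is one way the single architecture could support the distinct operational modes (perception, episodic recall, semantic retrieval) described above.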