Discrete and continuous representations of content (e.g., of language or images) have distinct properties that can be exploited when machines understand or reason with this content. This position paper puts forward our view on the role of discrete and continuous representations and their processing in the field of deep learning. Current neural network models operate on continuous-valued data, compressing information into dense, distributed embeddings. In stark contrast, humans communicate through language using discrete symbols. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Additionally, human reasoning involves symbol manipulation at the cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization, and efficient learning. Motivated by these insights, we argue in this paper that combining discrete and continuous representations and their processing will be essential to building systems that exhibit a general form of intelligence. We suggest and discuss several avenues for improving current neural networks through the inclusion of discrete elements, so as to combine the advantages of both types of representations.