Complex visual scenes composed of multiple objects, each with attributes such as name, location, pose, and color, are challenging to describe for training neural networks. Usually, deep networks are trained in a supervised fashion on categorical scene descriptions, which contain the names of the individual objects but lack information about their other attributes. Here, we use distributed representations of object attributes and vector operations in a vector symbolic architecture (VSA) to create a full compositional description of a scene in a single high-dimensional vector. To control the scene composition, we use artificial images composed of multiple translated and colored MNIST digits. In contrast to learning category labels, we train deep neural networks to output the full compositional vector description of an input image. The output of the deep network can then be interpreted by a VSA resonator network to extract the identity or other attributes of individual objects. We evaluate the performance and generalization properties of the system on randomly generated scenes. Specifically, we show that the network learns the task and generalizes to unseen digit shapes and scene configurations. However, the generalization ability of the trained model is limited: if the training data contain a gap, for example an object never shown at a particular image location, learning does not automatically fill it.
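To make the encoding concrete, the following is a minimal sketch of how such a compositional scene vector can be built with a VSA that uses random bipolar vectors and Hadamard-product binding. The dimension D = 1024, the codebook sizes, and all names here are illustrative assumptions, not necessarily the exact setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimension (illustrative choice)

# Random bipolar codebooks, one vector per attribute value (sizes are assumptions).
digits    = {d: rng.choice([-1, 1], D) for d in range(10)}
colors    = {c: rng.choice([-1, 1], D) for c in ["red", "green", "blue"]}
positions = {p: rng.choice([-1, 1], D) for p in ["top-left", "top-right",
                                                 "bottom-left", "bottom-right"]}

def bind(*vs):
    """Hadamard-product binding; bipolar vectors are their own binding inverse."""
    out = np.ones(D)
    for v in vs:
        out = out * v
    return out

# One object = the binding of its attribute vectors;
# the scene = the superposition (sum) of its objects.
obj1 = bind(digits[3], colors["red"],  positions["top-left"])
obj2 = bind(digits[7], colors["blue"], positions["bottom-right"])
scene = obj1 + obj2   # compositional scene vector, the network's training target
```

In this scheme, each object is a product of its attribute vectors and the scene is their sum; this D-dimensional sum is the kind of target the deep network is trained to regress instead of a category label.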
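The decoding direction can be sketched in the same spirit: a resonator network iteratively factors a bound vector back into one entry per attribute codebook by unbinding the current estimates of the other factors and cleaning up against the codebook. The sketch below continues the one above and follows the standard bipolar resonator update; the iteration count, the helper msign, and the single-object query are illustrative assumptions.

```python
# (continues the encoding sketch above)
Dmat = np.stack(list(digits.values()))     # (10, D) digit codebook
Cmat = np.stack(list(colors.values()))     # (3, D)  color codebook
Pmat = np.stack(list(positions.values()))  # (4, D)  position codebook

def msign(v):
    """Sign function without zeros, keeping vectors bipolar."""
    return np.where(v >= 0, 1, -1)

def resonator(s, codebooks, n_iter=50):
    """Factor a bound vector s into one entry per codebook.
    codebooks: list of (K_f, D) bipolar matrices, one per attribute factor."""
    dim = s.shape[0]
    # Initialize each factor estimate as the superposition of its codebook.
    est = [msign(C.sum(axis=0)) for C in codebooks]
    for _ in range(n_iter):
        for f, C in enumerate(codebooks):
            # Unbind the current estimates of all other factors
            # (bipolar vectors are their own binding inverse).
            others = np.ones(dim)
            for g, e in enumerate(est):
                if g != f:
                    others = others * e
            x = s * others
            # Clean up: project onto the codebook and back.
            est[f] = msign(C.T @ (C @ x))
    return est

# Factor a single object vector back into its attributes.
d_hat, c_hat, p_hat = resonator(obj1, [Dmat, Cmat, Pmat])
print(np.argmax(Dmat @ d_hat),                 # expected: 3
      list(colors)[np.argmax(Cmat @ c_hat)],   # expected: 'red'
      list(positions)[np.argmax(Pmat @ p_hat)])  # expected: 'top-left'
```

In this setup, the resonator is queried on one object vector at a time; the search over all attribute combinations is performed implicitly by the alternating unbind-and-cleanup iterations rather than by enumerating the combinations explicitly.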