Self-supervised learning (SSL) learns to capture discriminative visual features useful for knowledge transfer. To better accommodate the object-centric nature of current downstream tasks, such as object recognition and detection, various methods have been proposed to suppress contextual biases or to disentangle objects from their contexts. Nevertheless, these methods may prove inadequate in situations where object identity must be reasoned from the associated context, such as recognizing or inferring tiny or occluded objects. As an initial effort in the SSL literature, we investigate whether and how contextual associations can be enhanced for visual reasoning within SSL regimes, by (a) proposing a new Self-supervised method with external memories for Context Reasoning (SeCo), and (b) introducing two new downstream tasks, lift-the-flap and object priming, which address the problems of "what" and "where" in context reasoning. On both tasks, SeCo outperformed all state-of-the-art (SOTA) SSL methods by a significant margin. Our network analysis revealed that the proposed external memory in SeCo learns to store prior contextual knowledge, facilitating target identity inference in the lift-the-flap task. Moreover, we conducted psychophysics experiments and introduced a Human benchmark in Object Priming dataset (HOP). Our results demonstrate that SeCo exhibits human-like behaviors.