Humans have a remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost can't help but draw probable inferences beyond the literal scene based on our everyday experience and knowledge about the world. For example, if we see a "20 mph" sign alongside a road, we might assume the street sits in a residential area (rather than on a highway), even if no houses are pictured. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents. We adopt a free-viewing paradigm: participants first observe and identify salient clues within images (e.g., objects, actions) and then provide a plausible inference about the scene, given the clue. In total, we collect 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for inferences via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly-collected diagnostic corpus of 19K Likert-scale judgments. While we find that fine-tuning CLIP-RN50x64 with a multitask objective outperforms strong baselines, significant headroom exists between model performance and human agreement. We provide analysis that points towards future work.
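To make the retrieval axis concrete, the sketch below ranks candidate inferences for an image by CLIP similarity. This is a minimal illustration, not the paper's exact evaluation protocol (which conditions on localized clue regions and uses a fine-tuned, multitask model); the image path and candidate inferences are placeholder examples, and only the RN50x64 backbone name comes from the abstract.

```python
# Sketch: zero-shot inference retrieval with OpenAI CLIP (pip install clip
# from https://github.com/openai/CLIP). Candidates and "scene.jpg" are
# hypothetical; the Sherlock benchmark retrieves over a much larger corpus.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x64", device=device)

image = preprocess(Image.open("scene.jpg")).unsqueeze(0).to(device)
candidates = [
    "the street is in a residential area",
    "the road is a highway",
    "the driver is late for work",
]
text = clip.tokenize(candidates).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    # Normalize so dot products are cosine similarities.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(0)

# Rank candidate inferences from most to least compatible with the image.
for cand, score in sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {cand}")
```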