Visual Question Answering (VQA) has become one of the key benchmarks of visual recognition progress. Multiple VQA extensions have been explored to better simulate real-world settings: different question formulations, changing training and test distributions, conversational consistency in dialogues, and explanation-based answering. In this work, we further expand this space by considering visual questions that include a spatial point of reference. Pointing is a nearly universal gesture among humans, and real-world VQA is likely to involve a gesture towards the target region. Concretely, we (1) introduce and motivate point-input questions as an extension of VQA, (2) define three novel classes of questions within this space, and (3) for each class, introduce both a benchmark dataset and a series of baseline models to handle its unique challenges. There are two key distinctions from prior work. First, we explicitly design the benchmarks to require the point input, i.e., we ensure that the visual question cannot be answered accurately without the spatial reference. Second, we explicitly explore the more realistic and natural point-based spatial input rather than the standard but unnatural bounding-box input. Through our exploration we uncover and address several visual recognition challenges, including the ability to infer human intent, reason both locally and globally about the image, and effectively combine visual, language, and spatial inputs. Code is available at: https://github.com/princetonvisualai/pointingqa.