This paper presents Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we simply cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural net to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural net knows where and what the objects are, we just need to teach it to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset compared to highly specialized and well-optimized detection algorithms.
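To make the "objects as token sequences" idea concrete, here is a minimal sketch of how a bounding box and class label might be quantized into discrete tokens and decoded back. The bin count, vocabulary layout, and function names are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical Pix2Seq-style tokenization sketch: each object becomes
# 5 tokens (4 quantized coordinates + 1 class token). The bin count and
# vocabulary offset below are assumptions for illustration.

NUM_BINS = 1000          # assumed number of coordinate quantization bins
CLASS_OFFSET = NUM_BINS  # class tokens placed after coordinate tokens

def box_to_tokens(box, label, img_w, img_h, num_bins=NUM_BINS):
    """Convert one (ymin, xmin, ymax, xmax) box + class label to 5 tokens."""
    ymin, xmin, ymax, xmax = box
    # Normalize coordinates to [0, 1], then quantize into discrete bins.
    norm = [ymin / img_h, xmin / img_w, ymax / img_h, xmax / img_w]
    coord_tokens = [min(int(v * num_bins), num_bins - 1) for v in norm]
    return coord_tokens + [CLASS_OFFSET + label]

def tokens_to_box(tokens, img_w, img_h, num_bins=NUM_BINS):
    """Invert box_to_tokens (exact up to quantization error)."""
    ymin, xmin, ymax, xmax, cls = tokens
    scale = [img_h, img_w, img_h, img_w]
    box = [t / num_bins * s for t, s in zip((ymin, xmin, ymax, xmax), scale)]
    return box, cls - CLASS_OFFSET
```

A sequence for a whole image would concatenate such 5-token groups for every object, so a standard autoregressive decoder can emit detections token by token.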