If you ask a human to describe an image, they might do so in a thousand different ways. Traditionally, image captioning models are trained to approximate the reference distribution of image captions; however, doing so encourages captions that are viewpoint-impoverished. Such captions often focus on only a subset of the possible details, while ignoring potentially useful information in the scene. In this work, we introduce a simple, yet novel, method: "Image Captioning by Committee Consensus" ($IC^3$), designed to generate a single caption that captures high-level details from several viewpoints. Notably, humans rate captions produced by $IC^3$ as at least as helpful as those from baseline SOTA models more than two-thirds of the time, and $IC^3$ captions can improve the performance of SOTA automated recall systems by up to 84%, indicating significant material improvements over existing SOTA approaches for visual description. Our code is publicly available at https://github.com/DavidMChan/caption-by-committee
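The abstract does not spell out the pipeline, but the committee-consensus idea can be sketched as two stages: sample several diverse captions from a captioning model (the "committee"), then ask an instruction-following language model to summarize them into a single consensus caption. Below is a minimal illustrative sketch of that idea; the BLIP checkpoint, sampling settings, and prompt wording are assumptions for illustration, not the paper's exact implementation (see the linked repository for that).

```python
# Hypothetical sketch of a committee-consensus captioning pipeline.
# Model choice, decoding settings, and prompt wording are illustrative
# assumptions, not the authors' exact implementation.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

def committee_captions(image: Image.Image, k: int = 5) -> list[str]:
    """Sample k diverse captions (the 'committee') via stochastic decoding."""
    inputs = processor(images=image, return_tensors="pt")
    outputs = captioner.generate(
        **inputs,
        do_sample=True,           # stochastic decoding -> diverse viewpoints
        temperature=1.0,
        top_p=0.9,
        num_return_sequences=k,
        max_new_tokens=40,
    )
    return processor.batch_decode(outputs, skip_special_tokens=True)

def consensus_prompt(captions: list[str]) -> str:
    """Build a summarization prompt for an instruction-following LLM."""
    listing = "\n".join(f"- {c}" for c in captions)
    return (
        "The following captions describe the same image from different "
        f"viewpoints:\n{listing}\n"
        "Write a single caption that combines the details they capture."
    )

image = Image.open("example.jpg")  # any RGB image
prompt = consensus_prompt(committee_captions(image))
# Pass `prompt` to any instruction-following LLM to obtain the consensus caption.
```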