We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks. In contrast to previous models, UViM has the same functional form for all tasks; it requires no task-specific modifications that demand extensive human expertise. The approach involves two components: (I) a base model (feed-forward) that is trained to directly predict raw vision outputs, guided by a learned discrete code, and (II) a language model (autoregressive) that is trained to generate the guiding code. These components complement each other: the language model is well-suited to modeling structured, interdependent data, while the base model is efficient at dealing with high-dimensional outputs. We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks: panoptic segmentation, depth prediction, and image colorization, where we achieve competitive and near state-of-the-art results. Our experimental results suggest that UViM is a promising candidate for a unified modeling approach in computer vision.
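To make the two-component structure concrete, below is a minimal conceptual sketch of UViM-style inference as described in the abstract: an autoregressive model first generates a short discrete guiding code, and a feed-forward base model then decodes the dense vision output in a single pass. All names here (GuidingLM, BaseModel, uvim_inference) and the specific shapes and sizes are hypothetical placeholders for illustration, not the authors' actual API or implementation.

import numpy as np


class GuidingLM:
    """Autoregressive component: produces a short sequence of discrete code tokens."""

    def __init__(self, code_length=256, vocab_size=4096):
        self.code_length = code_length
        self.vocab_size = vocab_size

    def generate(self, image):
        # Placeholder: a real model would sample tokens one at a time, each
        # conditioned on the image and on the previously generated tokens.
        rng = np.random.default_rng(0)
        return rng.integers(0, self.vocab_size, size=self.code_length)


class BaseModel:
    """Feed-forward component: maps image + guiding code to a dense vision output."""

    def predict(self, image, code):
        # Placeholder: a real model would decode a per-pixel output
        # (e.g. panoptic labels, depth, or color) guided by the code.
        h, w, _ = image.shape
        return np.zeros((h, w), dtype=np.int32)


def uvim_inference(image, lm, base):
    code = lm.generate(image)         # stage 1: autoregressive code generation
    return base.predict(image, code)  # stage 2: single feed-forward decoding


# Usage on a dummy RGB image.
output = uvim_inference(np.zeros((512, 512, 3)), GuidingLM(), BaseModel())

The split reflects the complementarity stated above: the autoregressive model captures structured, interdependent choices compactly in the code, while the feed-forward base model handles the high-dimensional per-pixel output efficiently.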