Recently, there have been breakthroughs in computer vision ("CV") toward more generalizable models with the advent of systems such as CLIP and ALIGN. In this paper, we analyze CLIP and highlight some of the challenges such models pose. CLIP reduces the need for task-specific training data, potentially opening up many niche tasks to automation. CLIP also allows its users to flexibly specify image classification classes in natural language, which we find can shift how biases manifest. Additionally, through some preliminary probes we find that CLIP can inherit biases found in prior computer vision systems. Given the wide and unpredictable domain of uses for such models, this raises questions regarding what sufficiently safe behaviour for such systems may look like. These results add evidence to the growing body of work calling for a change in the notion of a 'better' model: moving beyond simply higher accuracy on task-oriented capability evaluations, and towards a broader sense of 'better' that takes into account deployment-critical features, such as the different contexts of use and the people who interact with the model, when thinking about model deployment.
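To make the natural-language class specification concrete, the following is a minimal sketch of zero-shot classification with CLIP using the Hugging Face transformers interface; the checkpoint name, image path, and candidate prompts are illustrative assumptions rather than details from the paper.

```python
# Sketch: zero-shot image classification where the "classes" are free-form
# natural-language prompts chosen by the user (assumed setup, not the paper's code).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image

# Classes are specified as text prompts rather than a fixed label set.
candidate_labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]

inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax yields
# a probability over the user-defined prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(candidate_labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the class set is just a list of strings, changing the wording of the prompts changes the classifier itself, which is one reason the paper notes that this flexibility can shift how biases manifest.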