Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation: given two image datasets, dataset explanation aims to automatically describe their dataset-level distribution shifts in natural language. Existing techniques for monitoring distribution shifts provide inadequate information for understanding datasets with the goal of improving data quality. We therefore introduce GSCLIP, a training-free framework that solves the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method for identifying explanations that properly summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language-model generation. Systematic evaluation on natural data shifts verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation at scale.
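To make the selector idea concrete, the sketch below shows one plausible way to score candidate natural-language explanations of a shift between two image datasets. It assumes CLIP-style, L2-normalized embeddings for both texts and images; the function name `score_explanations`, the scoring rule (a good explanation of the shift from dataset A to B should match B's images better than A's, on average), and the toy stand-in embeddings are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def score_explanations(text_embs, embs_a, embs_b):
    """Score candidate shift explanations between two image datasets.

    Assumes all rows are L2-normalized CLIP-style embeddings, so the
    dot product is cosine similarity. A candidate sentence describing
    the shift from A to B should be, on average, more similar to B's
    images than to A's; higher score = better explanation of the shift.
    """
    sim_a = text_embs @ embs_a.T  # (num_texts, num_images_in_A)
    sim_b = text_embs @ embs_b.T  # (num_texts, num_images_in_B)
    return sim_b.mean(axis=1) - sim_a.mean(axis=1)

# Toy stand-in embeddings (in practice these would come from a CLIP
# image/text encoder). Dataset A clusters around w, dataset B around v.
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
embs_a = np.tile(w, (4, 1))
embs_b = np.tile(v, (4, 1))
candidate_texts = np.stack([v, w])  # text 0 describes B, text 1 describes A

scores = score_explanations(candidate_texts, embs_a, embs_b)
# text 0 (aligned with B) should outscore text 1 (aligned with A)
```

With real CLIP features the same ranking rule would let the selector pick, from a pool of generated sentences, the ones that best summarize the dataset-level shift.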