Zero-shot learning is the problem of predicting instances of classes not seen during training. One approach to zero-shot learning is to provide auxiliary class information to the model. Prior work in this vein has largely used either expensive per-instance annotation or single class-level descriptions, but per-instance descriptions are hard to scale and single class descriptions may not be rich enough. Furthermore, these works have relied exclusively on natural-language descriptions, simple bi-encoder models, and modality- or task-specific methods. These approaches have several limitations: text supervision may not always be available or optimal, and bi-encoders may learn only coarse relations between inputs and class descriptions. In this work, we present SemSup, a novel approach that uses (1) a scalable method for sampling multiple descriptions per class, which improves performance over single descriptions, (2) alternative description formats, such as JSON, that are easy to generate and outperform text in certain settings, and (3) hybrid lexical-semantic similarity to leverage fine-grained information in class descriptions. We demonstrate the effectiveness of SemSup across four datasets, two modalities, and three generalization settings. For example, across text and image datasets, SemSup increases unseen-class generalization accuracy by 15 points on average compared to the closest baseline.
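As an illustrative sketch only (not the authors' implementation), the two ideas of scoring against multiple sampled descriptions per class and combining lexical with semantic similarity could be combined as follows. The `embed` function here is a toy hashed bag-of-words encoder standing in for a trained bi-encoder, and the names, weighting scheme, and `alpha` parameter are assumptions for illustration:

```python
import numpy as np

def lexical_sim(text_a: str, text_b: str) -> float:
    # Token-overlap (Jaccard) score: a simple fine-grained lexical signal.
    ta, tb = set(text_a.lower().split()), set(text_b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def embed(text: str, dim: int = 16) -> np.ndarray:
    # Toy deterministic bag-of-tokens embedding via feature hashing.
    # A real system would use a trained bi-encoder here.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[sum(ord(c) for c in tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def hybrid_score(instance: str, description: str, alpha: float = 0.5) -> float:
    # Interpolate semantic (cosine) and lexical (Jaccard) similarity.
    semantic = float(embed(instance) @ embed(description))
    return alpha * semantic + (1 - alpha) * lexical_sim(instance, description)

def classify(instance: str, class_descriptions: dict[str, list[str]]) -> str:
    # Average the hybrid score over the multiple sampled descriptions
    # available for each class, then pick the best-scoring class.
    scores = {
        cls: float(np.mean([hybrid_score(instance, d) for d in descs]))
        for cls, descs in class_descriptions.items()
    }
    return max(scores, key=scores.get)
```

A usage example under the same toy setup: `classify("a photo of a striped horse", {"zebra": ["a striped horse", "an animal with black and white stripes"], "car": ["a motor vehicle with four wheels"]})` selects `"zebra"`, since both its descriptions overlap lexically and semantically with the input.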