Zero-shot learning is the problem of predicting instances of classes not seen during training. One approach to zero-shot learning is to provide auxiliary class information to the model. Prior work in this vein has largely used expensive per-instance annotation or single class-level descriptions, but per-instance descriptions are hard to scale and single class descriptions may not be rich enough. Furthermore, these works have relied exclusively on natural-language descriptions, simple bi-encoder models, and modality- or task-specific methods. These approaches have several limitations: text supervision may not always be available or optimal, and bi-encoders may learn only coarse relations between inputs and class descriptions. In this work, we present SemSup, a novel approach that uses (1) a scalable multiple-description sampling method that improves performance over single descriptions, (2) alternative description formats such as JSON that are easy to generate and outperform text in certain settings, and (3) hybrid lexical-semantic similarity to leverage fine-grained information in class descriptions. We demonstrate the effectiveness of SemSup across four datasets, two modalities, and three generalization settings. For example, across text and image datasets, SemSup increases unseen-class generalization accuracy by 15 points on average over the closest baseline.
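To make the setup concrete, the following is a minimal, hypothetical sketch of zero-shot classification with class descriptions: an instance and each class description are embedded by a shared encoder, each class is scored by averaging similarity over several sampled descriptions (rather than a single one), and the highest-scoring class is predicted. This is not the SemSup implementation; the toy bag-of-words `embed` function stands in for a learned neural encoder, and the class names and descriptions are invented for illustration.

```python
import math
import random
from collections import Counter

def embed(text):
    # Toy stand-in for a learned encoder: L2-normalized bag-of-words vector.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    # Cosine similarity between two sparse normalized vectors.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def zero_shot_predict(instance, class_descriptions, k=2, seed=0):
    # Score each class by averaging similarity over k sampled descriptions,
    # mirroring the idea that multiple descriptions per class are richer
    # than a single one.
    rng = random.Random(seed)
    x = embed(instance)
    scores = {}
    for label, descs in class_descriptions.items():
        sampled = rng.sample(descs, min(k, len(descs)))
        scores[label] = sum(cosine(x, embed(d)) for d in sampled) / len(sampled)
    return max(scores, key=scores.get)

# Unseen classes are specified only by descriptions; the model never
# saw labeled training examples of them.
class_descriptions = {
    "astronomy": ["study of stars planets and galaxies",
                  "telescopes observe celestial objects in the night sky"],
    "cooking":   ["preparing food with recipes and ingredients",
                  "baking roasting and seasoning dishes in the kitchen"],
}
print(zero_shot_predict("the telescope revealed distant galaxies",
                        class_descriptions))  # prints "astronomy"
```

Averaging over sampled descriptions makes the class representation less sensitive to any single description's wording, which is the motivation for multiple-description sampling over a singular class-level description.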