Semantic concepts are frequently defined by combinations of underlying attributes. As mappings from attributes to classes are often simple, attribute-based representations facilitate novel concept learning with zero or few examples. A significant limitation of existing attribute-based learning paradigms, such as zero-shot learning, is that the attributes are assumed to be known and fixed. In this work we study the rapid learning of attributes that were not previously labeled. Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes poses a stiffer challenge. We found that supervised learning with training attributes does not generalize well to new test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that the predictability of test attributes provides an informative estimate of a model's generalization ability.