Meta-learning has become a practical approach to few-shot image classification, where a visual recognition system is constructed from limited annotated data. An inductive bias such as an embedding is learned from a base class set with ample labeled examples and then generalized to few-shot tasks over novel classes. Surprisingly, we find that labels of the base class set are not necessary, and discriminative embeddings can be meta-learned in an unsupervised manner. Comprehensive analyses indicate that two modifications -- the semi-normalized distance metric and sufficient sampling -- improve unsupervised meta-learning (UML) significantly. Based on the modified baseline, we further amplify or compensate for the characteristics of few-shot tasks when training a UML model. First, mixed embeddings are incorporated to increase the difficulty of few-shot tasks. Next, we utilize a task-specific embedding transformation to handle the specific properties of each task while maintaining the generalization ability of the vanilla embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods by a 4-10% margin, and embeddings learned with our UML achieve comparable or even better performance than their supervised counterparts.
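To make the "semi-normalized distance metric" concrete, below is a minimal sketch of a prototype-style episodic classifier, assuming the semi-normalization means that only the class prototypes (support centroids) are L2-normalized while query embeddings keep their norms. The function name `semi_normalized_logits` and the temperature `TAU` are illustrative assumptions, not identifiers from the paper.

```python
# Hedged sketch: prototype-based few-shot classification with a
# "semi-normalized" similarity, assumed here to mean inner products
# between raw query embeddings and unit-norm class prototypes.
import numpy as np

TAU = 0.1  # softmax temperature (assumed hyperparameter)

def semi_normalized_logits(queries, support, support_labels, n_way):
    """queries: (Q, D) embeddings; support: (S, D) embeddings;
    support_labels: (S,) ints in [0, n_way). Returns (Q, n_way) logits."""
    # Class prototypes = mean of support embeddings per class.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in range(n_way)]
    )
    # Normalize the prototypes only (the "semi" part of the metric).
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-12
    # Inner product between un-normalized queries and unit-norm prototypes.
    return queries @ prototypes.T / TAU

# Toy usage: a 5-way 1-shot episode with random 64-d embeddings.
rng = np.random.default_rng(0)
sup, qry = rng.normal(size=(5, 64)), rng.normal(size=(15, 64))
labels = np.arange(5)
print(semi_normalized_logits(qry, sup, labels, n_way=5).shape)  # (15, 5)
```

In an unsupervised setting, the support/query labels would come from pseudo-classes (e.g., augmented views of the same unlabeled image) rather than base-class annotations.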