We propose a transductive Laplacian-regularized inference for few-shot tasks. Given any feature embedding learned from the base classes, we minimize a quadratic binary-assignment function containing two terms: (1) a unary term assigning query samples to the nearest class prototype, and (2) a pairwise Laplacian term encouraging nearby query samples to have consistent label assignments. Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set. We derive a computationally efficient bound optimizer of a relaxation of our function, which computes independent (parallel) updates for each query sample, while guaranteeing convergence. Following a simple cross-entropy training on the base classes, and without complex meta-learning strategies, we conducted comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and data sets. Furthermore, our transductive inference is very fast, with computational times that are close to inductive inference, and can be used for large-scale few-shot tasks.
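The inference described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's reference implementation: the function name, the fixed iteration count, and the simple exponential/normalize update are assumptions for exposition. It shows the two terms of the objective (a unary prototype-distance cost and a pairwise Laplacian smoothness term over a query affinity matrix `W`) and the key property that each query's soft assignment is updated independently, in parallel, at every bound-optimization step.

```python
import numpy as np

def laplacian_shot_inference(query, prototypes, W, lam=1.0, n_iters=20):
    """Hedged sketch of Laplacian-regularized transductive inference.

    query:      (Q, D) query features from the frozen base embedding
    prototypes: (C, D) class prototypes computed from the support set
    W:          (Q, Q) non-negative affinity matrix over the query set
                (e.g. a k-NN graph); lam weights the Laplacian term.
    """
    # Unary term: squared distance of each query to each class prototype.
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (Q, C)

    # Initialize soft assignments from the unary term alone.
    y = np.exp(-d)
    y /= y.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # Bound-optimizer step: every query is updated independently
        # (hence parallelizable), pulled toward its nearest prototype
        # (unary) and toward its graph neighbors' labels (pairwise).
        logits = -d + lam * (W @ y)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        y = np.exp(logits)
        y /= y.sum(axis=1, keepdims=True)

    return y.argmax(axis=1)  # hard labels for the query set
```

Because the update is a closed-form softmax over per-query logits, each iteration costs one sparse matrix-vector product plus elementwise operations, which is why the transductive inference stays close to inductive speed even on large query sets.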