Many few-shot learning approaches have been designed under the meta-learning framework, which learns from a variety of learning tasks and generalizes to new tasks. These meta-learning approaches perform as expected when all samples are drawn from the same distribution (i.i.d. observations). In real-world applications, however, the few-shot learning paradigm often suffers from data shift: samples in different tasks, or even within the same task, may be drawn from different data distributions. Most existing few-shot learning approaches are not designed with data shift in mind and thus show degraded performance when the data distribution shifts. Addressing data shift in few-shot learning is non-trivial, because each task contains only a limited number of labeled samples. To address this problem, we propose a novel metric-based meta-learning framework that extracts task-specific and task-shared representations with the help of a knowledge graph. Data shift within and between tasks can then be combated by combining the task-shared and task-specific representations. The proposed model is evaluated on popular benchmarks and two newly constructed challenging datasets, and the results demonstrate its strong performance.
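To make the core idea concrete, the following is a minimal sketch of the metric-based classification step described above: task-shared and task-specific feature vectors are combined, class prototypes are built from the labeled support set, and queries are assigned to the nearest prototype. The feature dimensions, the concatenation-based combination, and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def combine_features(shared, specific):
    # Assumed combination: concatenate task-shared and task-specific
    # representations along the feature axis.
    return np.concatenate([shared, specific], axis=-1)

def prototypes(support_feats, support_labels, n_classes):
    # Mean embedding of each class's support samples (prototype).
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_feats, protos):
    # Nearest-prototype assignment under squared Euclidean distance.
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with hand-crafted features.
support_shared = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
support_specific = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
support_labels = np.array([0, 0, 1, 1])

query_shared = np.array([[0., 0.5], [5., 5.5]])
query_specific = np.array([[1., 0.], [0., 1.]])

support_feats = combine_features(support_shared, support_specific)
query_feats = combine_features(query_shared, query_specific)
protos = prototypes(support_feats, support_labels, n_classes=2)
preds = classify(query_feats, protos)
# Each query lands exactly on its class prototype, so preds is [0, 1].
```

In this prototypical-network-style sketch, the task-specific component can shift per episode while the task-shared component stays stable, which is one simple way the combined representation can absorb distribution shift across tasks.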