Multimodal meta-learning is a recently introduced problem that extends conventional few-shot meta-learning to diverse multimodal task distributions. This setup is a step toward mimicking how humans draw on a diverse set of prior skills to learn new ones. Previous work has achieved encouraging performance; in particular, despite the diversity of multimodal tasks, it claims that a single meta-learner trained on a multimodal distribution can sometimes outperform multiple specialized meta-learners trained on individual unimodal distributions, attributing the improvement to knowledge transfer between the different modes of the task distribution. However, this knowledge transfer between multimodal tasks has not been investigated or verified in depth. Our work makes two contributions to multimodal meta-learning. First, we propose a method to quantify knowledge transfer between tasks of different modes at a micro level; our quantitative, task-level analysis is inspired by the recent transference idea from multi-task learning. Second, inspired by hard parameter sharing in multi-task learning and a new interpretation of related work, we propose a new multimodal meta-learner that outperforms existing work by considerable margins. While our major focus is multimodal meta-learning, our work also attempts to shed light on task interaction in conventional meta-learning. The code for this project is available at https://miladabd.github.io/KML.