Learning from set-structured data is a fundamental problem that has recently attracted increasing attention, and a series of summary networks have been introduced to handle set inputs. In fact, many meta-learning problems can be treated as set-input tasks. Most existing summary networks aim to design different architectures for the input set in order to enforce permutation invariance. However, scant attention has been paid to the common cases where different sets in a meta-distribution are closely related and share certain statistical properties. Viewing each set as a distribution over a set of global prototypes, this paper provides a novel prototype-oriented optimal transport (POT) framework to improve existing summary networks. To learn the distribution over the global prototypes, we minimize its regularized optimal transport distance to the set's empirical distribution over data points, providing a natural unsupervised way to improve the summary network. Since our plug-and-play framework can be applied to many meta-learning problems, we further instantiate it for few-shot classification and implicit meta generative modeling. Extensive experiments demonstrate that our framework significantly improves existing summary networks at learning more powerful summary statistics from sets, and that it can be successfully integrated into metric-based few-shot classification and generative modeling applications, providing a promising tool for addressing set-input and meta-learning problems.
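As a concrete sketch of the objective described above (the notation here is assumed for illustration, not quoted from the paper): given a set $\{x_i\}_{i=1}^{N}$, global prototypes $\{\beta_k\}_{k=1}^{K}$, learned prototype weights $h \in \Delta^{K}$, and a cost matrix $C_{ik} = d(x_i, \beta_k)$, the entropic-regularized optimal transport distance between the set's empirical distribution and the distribution over prototypes is typically written as

$$
\mathrm{OT}_{\varepsilon}\!\left(\tfrac{1}{N}\mathbf{1}_N,\, h\right)
= \min_{T \in \Pi\left(\frac{1}{N}\mathbf{1}_N,\, h\right)} \;\langle T, C \rangle \;-\; \varepsilon\, H(T),
\qquad
\Pi(a, b) = \left\{\, T \in \mathbb{R}_{+}^{N \times K} \;:\; T\mathbf{1}_K = a,\; T^{\top}\mathbf{1}_N = b \,\right\},
$$

where $H(T) = -\sum_{i,k} T_{ik}\left(\log T_{ik} - 1\right)$ is the entropy regularizer and $\varepsilon > 0$ its weight. Minimizing this distance over $h$ (and the summary-network parameters producing it) yields the unsupervised training signal the abstract refers to.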