A few-shot generative model should be able to generate data from a distribution after observing only a limited set of examples. In few-shot learning, the model is trained on data from many sets drawn from different distributions that share some underlying properties, such as sets of characters from different alphabets or sets of images of objects of different types. We extend current latent variable models for sets to a fully hierarchical approach with an attention-based point-to-set-level aggregation, and call our approach SCHA-VAE, for Set-Context-Hierarchical-Aggregation Variational Autoencoder. We explore iterative data sampling, likelihood-based model comparison, and adaptation-free out-of-distribution generalization. Our results show that the hierarchical formulation better captures the intrinsic variability within sets in the small-data regime. With this work we generalize deep latent variable approaches to few-shot learning, taking a step toward large-scale few-shot generation with a formulation that can readily work with current state-of-the-art deep generative models.
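As a rough illustration of the attention-based point-to-set aggregation mentioned above (not the paper's exact architecture): a learnable query vector attends over the per-point features of a set, and the attention-weighted sum yields a single permutation-invariant, set-level context vector. The function and variable names below are hypothetical, for illustration only.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_set_aggregation(points, query):
    """Aggregate point-level features into one set-level context vector.

    points : (n, d) array of per-point features for one set.
    query  : (d,) learnable query vector (here just a fixed array).
    Returns (context, weights): the (d,) set embedding and the (n,)
    attention weights, which sum to 1.
    """
    d = points.shape[1]
    scores = points @ query / np.sqrt(d)   # (n,) scaled dot-product scores
    weights = softmax(scores)              # (n,) attention distribution
    context = weights @ points             # (d,) weighted sum over the set
    return context, weights
```

Because the weighted sum is taken over all points, the output is invariant to the ordering of the set's elements, which is the key requirement for a set-level representation.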