A few-shot generative model should be able to generate data from a novel distribution by observing only a limited set of examples. In few-shot learning, the model is trained on data from many sets drawn from distributions that share underlying properties, such as sets of characters from different alphabets or objects from different categories. We extend current latent variable models for sets to a fully hierarchical approach with an attention-based point-to-set-level aggregation, and call our method SCHA-VAE, for Set-Context-Hierarchical-Aggregation Variational Autoencoder. We explore likelihood-based model comparison, iterative data sampling, and adaptation-free out-of-distribution generalization. Our results show that the hierarchical formulation better captures the intrinsic variability within sets in the small-data regime. This work generalizes deep latent variable approaches to few-shot learning, taking a step toward large-scale few-shot generation with a formulation that readily works with current state-of-the-art deep generative models.
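The attention-based point-to-set aggregation mentioned above can be illustrated with a minimal attention-pooling step: per-example ("point") embeddings of a set are combined into a single permutation-invariant context vector. The sketch below is a simplified NumPy illustration with a single learned query; the names (`attention_pool`, `query`, `Wk`, `Wv`) are assumptions for illustration and the paper's actual architecture is more elaborate.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_pool(points, query, Wk, Wv):
    """Aggregate per-point embeddings of one set into a set-level context.

    points: (n, d) embeddings of the n examples in a set
    query:  (d_k,) learned query vector (an assumption for this sketch)
    Wk, Wv: (d, d_k) and (d, d_v) projection matrices
    """
    keys = points @ Wk                            # (n, d_k)
    values = points @ Wv                          # (n, d_v)
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = softmax(scores)                     # attention over set members
    return weights @ values                       # (d_v,) set-level summary

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # a "set" of 5 example embeddings
q = rng.normal(size=4)
Wk = rng.normal(size=(8, 4))
Wv = rng.normal(size=(8, 6))

c = attention_pool(X, q, Wk, Wv)
# Reordering the set leaves the context unchanged: the aggregation is
# permutation-invariant, as required for a set-level representation.
c_perm = attention_pool(X[rng.permutation(5)], q, Wk, Wv)
assert np.allclose(c, c_perm)
```

Because the weighted sum is taken over all set members, the resulting context vector is invariant to the order of examples, which is the key property a point-to-set aggregation must satisfy.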