Generative modeling of set-structured data, such as point clouds, requires reasoning over local and global structures at various scales. However, adapting multi-scale frameworks developed for ordinary sequential data to set-structured data is nontrivial, as the model should be invariant to the permutation of set elements. In this paper, we propose SetVAE, a hierarchical variational autoencoder for sets. Motivated by recent progress in set encoding, we build SetVAE upon attentive modules that first partition the set and then project the partition back to the original cardinality. Exploiting this module, our hierarchical VAE learns latent variables at multiple scales, capturing coarse-to-fine dependencies among set elements while achieving permutation invariance. We evaluate our model on the point cloud generation task and achieve performance competitive with prior work using a substantially smaller model capacity. We qualitatively demonstrate that our model generalizes to unseen set sizes and learns interesting subset relations without supervision. Our implementation is available at https://github.com/jw9730/setvae.
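As a rough illustration of the attentive module described above, which projects the input set onto a smaller, fixed-size partition and then back to the original cardinality, the following PyTorch sketch shows one possible form of such a bottleneck. The class name, dimensions, and the use of nn.MultiheadAttention are illustrative assumptions rather than the authors' implementation; see the linked repository for the actual code.

```python
# A minimal sketch (not the authors' implementation) of an attentive bottleneck:
# a small set of learned inducing points attends to the input set (projection to
# a fixed, smaller cardinality), and the input set then attends back to the
# result (projection back to the original cardinality). The output is
# permutation-equivariant with respect to the input elements.
import torch
import torch.nn as nn


class AttentiveBottleneck(nn.Module):
    def __init__(self, dim: int, num_heads: int, num_inducing: int):
        super().__init__()
        # Learned inducing points acting as a fixed-size "partition" of the set.
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.attn_down = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_up = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim), where the set size n may vary across inputs.
        i = self.inducing.expand(x.size(0), -1, -1)
        # Project the set onto the inducing points (smaller cardinality).
        h, _ = self.attn_down(query=i, key=x, value=x)
        # Project back to the original cardinality n.
        out, _ = self.attn_up(query=x, key=h, value=h)
        return out


if __name__ == "__main__":
    layer = AttentiveBottleneck(dim=64, num_heads=4, num_inducing=16)
    points = torch.randn(2, 2048, 64)  # e.g. encoded point-cloud features
    print(layer(points).shape)  # torch.Size([2, 2048, 64])
```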