Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without meaningfully using the latent representation. We introduce inference critics that detect and counteract posterior collapse by requiring correspondence between latent variables and the observations. By connecting the critic's objective to the literature on self-supervised contrastive representation learning, we show both theoretically and empirically that optimizing inference critics increases the mutual information between observations and latents, mitigating posterior collapse. The approach is straightforward to implement and requires significantly less training time than prior methods, yet it obtains competitive results on three established datasets. Overall, the approach lays the foundation for bridging the previously disconnected frameworks of contrastive learning and probabilistic modeling with variational autoencoders, underscoring the benefits both communities may find at their intersection.
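To make the mechanism concrete, the following is a minimal sketch of the general idea under stated assumptions: an InfoNCE-style critic scores matched (observation, latent) pairs against mismatched pairs within a batch, and its loss is added to the negative ELBO. The class and function names (Critic, infonce_critic_loss, vae_with_critic_loss, critic_weight) are illustrative and not taken from the paper; the paper's actual critic architecture and weighting may differ.

```python
# Hedged sketch: an InfoNCE-style "inference critic" term added to a VAE loss.
# All names and hyperparameters here are illustrative assumptions, not the
# paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Scores correspondence between observations x and latent samples z."""
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 128):
        super().__init__()
        self.embed_x = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.embed_z = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Pairwise scores: entry (i, j) scores observation x_i against latent z_j.
        hx = F.normalize(self.embed_x(x), dim=-1)
        hz = F.normalize(self.embed_z(z), dim=-1)
        return hx @ hz.t()

def infonce_critic_loss(scores: torch.Tensor) -> torch.Tensor:
    """InfoNCE objective: matched (x_i, z_i) pairs sit on the diagonal;
    minimizing this cross-entropy maximizes a lower bound on I(x; z)."""
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)

def vae_with_critic_loss(x, recon_x, mu, logvar, critic, critic_weight=1.0):
    """Negative ELBO plus a critic term that discourages posterior collapse."""
    recon = F.mse_loss(recon_x, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
    critic_term = infonce_critic_loss(critic(x, z))
    return recon + kl + critic_weight * critic_term
```

In this sketch the critic can only achieve low loss when latents remain informative about their paired observations, which is the sense in which the critic "requires correspondence" and keeps the mutual information from collapsing.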