Established methods for unsupervised representation learning, such as variational autoencoders, produce either no or poorly calibrated uncertainty estimates, making it difficult to evaluate whether learned representations are stable and reliable. In this work, we present a Bayesian autoencoder for unsupervised representation learning, trained by maximizing a novel variational lower bound on the autoencoder evidence. The bound is maximized using Monte Carlo EM with a variational distribution that takes the form of a Laplace approximation. We develop a new Hessian approximation that scales linearly with data size, allowing us to model high-dimensional data. Empirically, we show that our Laplacian autoencoder produces well-calibrated uncertainty estimates in both latent and output space. We demonstrate that this results in improved performance across a multitude of downstream tasks.
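The core idea of the Laplace approximation used above can be illustrated on a toy model. The following is a minimal sketch, not the paper's implementation: for a scalar linear model with squared loss, the MAP estimate together with the Hessian of the loss defines a Gaussian posterior over the weight, and the Hessian is computed in a single pass over the data, i.e. at cost linear in the dataset size.

```python
# Minimal sketch (illustrative, not the paper's method) of a Laplace
# approximation: fit a MAP estimate, then use the loss Hessian at that
# point as the precision of a Gaussian posterior over the parameter.

def fit_map(xs, ys):
    # MAP estimate for the scalar model y ~ w * x under squared loss
    # (flat prior): closed-form least squares.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def laplace_variance(xs):
    # Hessian of 0.5 * sum_i (w * x_i - y_i)^2 w.r.t. w is sum_i x_i^2:
    # one pass over the data, so the cost scales linearly with data size.
    hessian = sum(x * x for x in xs)
    return 1.0 / hessian  # posterior variance of N(w_map, H^{-1})

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
w_map = fit_map(xs, ys)          # mean of the Laplace posterior
var = laplace_variance(xs)       # its variance
```

In the paper's setting the parameter is the full autoencoder weight vector and the Hessian is approximated (e.g. diagonally) rather than computed exactly, but the structure of the posterior, a Gaussian centered at the trained weights with covariance given by an inverse Hessian, is the same.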