Representation learning has become a practical family of methods for building rich parametric encodings of massive high-dimensional data while performing well on reconstruction. When considering unsupervised tasks with train-test distribution shifts, the probabilistic viewpoint helps to address overconfidence and poor calibration of predictions. However, directly applying Bayesian inference to neural network weights remains an arduous problem for multiple reasons, e.g. the curse of dimensionality or intractability issues. The Laplace approximation (LA) offers a solution here, as one can build Gaussian approximations of the posterior density of the weights via second-order Taylor expansions at certain locations of the parameter space. In this work, we present a Bayesian autoencoder for unsupervised representation learning inspired by the LA. Our method performs iterative Laplace updates to obtain a novel variational lower bound on the autoencoder evidence. The vast computational burden of the second-order partial derivatives is avoided via approximations of the Hessian matrix. Empirically, we demonstrate the scalability and performance of the Laplacian autoencoder by providing well-calibrated uncertainties for out-of-distribution detection, geodesics for differential geometry, and missing-data imputation.
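For concreteness, the Laplace approximation referenced above takes the following generic form (the notation here is illustrative and not necessarily the paper's own):

$$
p(\theta \mid \mathcal{D}) \;\approx\; \mathcal{N}\big(\theta;\, \theta^{*},\, H^{-1}\big),
\qquad
H = \nabla^{2}_{\theta}\,\mathcal{L}(\theta)\,\big|_{\theta = \theta^{*}},
$$

where $\mathcal{L}(\theta) = -\log p(\mathcal{D} \mid \theta) - \log p(\theta)$ is the negative log joint and $\theta^{*}$ is a local minimum of $\mathcal{L}$ (a MAP estimate). A second-order Taylor expansion of $\mathcal{L}$ around $\theta^{*}$ drops the first-order term (the gradient vanishes at the minimum), leaving a quadratic whose exponential is the Gaussian above with covariance $H^{-1}$; the Hessian $H$ is precisely the object whose cost the approximations mentioned in the abstract are designed to avoid.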