Autoencoders gained popularity during the deep learning revolution owing to their ability to compress data and provide dimensionality reduction. Although prominent deep learning methods have been used to enhance autoencoders, providing robust uncertainty quantification remains a challenge, and it has so far been addressed mainly with variational autoencoders. Bayesian inference via Markov Chain Monte Carlo (MCMC) sampling has faced several limitations for large models; however, recent advances in parallel computing and sophisticated proposal schemes have opened new avenues. This paper presents Bayesian autoencoders powered by MCMC sampling, implemented with parallel computing and a Langevin-gradient proposal distribution. The results indicate that the proposed Bayesian autoencoder achieves accuracy comparable to related methods in the literature while also providing uncertainty quantification in the reduced data representation. This motivates further applications of the Bayesian autoencoder framework to other deep learning models.
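The core idea of the abstract can be illustrated with a minimal sketch: sampling the weights of a tiny linear autoencoder with a Langevin-gradient (MALA-style) proposal inside a Metropolis-Hastings loop, so the posterior over weights yields uncertainty in the reduced representation. This is an illustrative sketch only, not the paper's implementation; the toy data, the noise variance `sigma2`, the prior variance `tau2`, and the step size `eps` are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D lying near a 2-D subspace (illustrative assumption).
n, d, h = 200, 5, 2
Z = rng.normal(size=(n, h))
X = Z @ rng.normal(size=(h, d)) + 0.05 * rng.normal(size=(n, d))

sigma2, tau2 = 0.05, 10.0   # assumed likelihood-noise and Gaussian-prior variances
shapes = [(d, h), (h, d)]   # encoder and decoder weight matrices
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    W1 = theta[:sizes[0]].reshape(shapes[0])
    W2 = theta[sizes[0]:].reshape(shapes[1])
    return W1, W2

def log_post_and_grad(theta):
    """Log posterior (Gaussian likelihood + Gaussian prior) and its gradient."""
    W1, W2 = unpack(theta)
    H = X @ W1                       # latent (reduced) representation
    E = X - H @ W2                   # reconstruction error
    logp = -0.5 * np.sum(E ** 2) / sigma2 - 0.5 * np.sum(theta ** 2) / tau2
    gW1 = (X.T @ E @ W2.T) / sigma2 - W1 / tau2
    gW2 = (H.T @ E) / sigma2 - W2 / tau2
    return logp, np.concatenate([gW1.ravel(), gW2.ravel()])

def mala(n_samples=2000, eps=1e-3):
    theta = 0.1 * rng.normal(size=sum(sizes))
    logp, grad = log_post_and_grad(theta)
    samples, accepted = [], 0
    for _ in range(n_samples):
        # Langevin-gradient proposal: gradient drift plus Gaussian noise.
        prop = theta + 0.5 * eps ** 2 * grad + eps * rng.normal(size=theta.size)
        logp_p, grad_p = log_post_and_grad(prop)
        # Metropolis-Hastings correction for the asymmetric proposal.
        fwd = -np.sum((prop - theta - 0.5 * eps ** 2 * grad) ** 2) / (2 * eps ** 2)
        bwd = -np.sum((theta - prop - 0.5 * eps ** 2 * grad_p) ** 2) / (2 * eps ** 2)
        if np.log(rng.uniform()) < logp_p - logp + bwd - fwd:
            theta, logp, grad = prop, logp_p, grad_p
            accepted += 1
        samples.append(theta.copy())
    return np.array(samples), accepted / n_samples

samples, acc_rate = mala()
# Posterior samples of the encoder give a distribution over latent codes,
# i.e. uncertainty quantification in the reduced data representation.
W1_mean, _ = unpack(samples[1000:].mean(axis=0))
latent_mean = X @ W1_mean
```

In the full method described by the abstract, such chains would run in parallel and the gradients would come from backpropagation through a nonlinear autoencoder; the Metropolis-Hastings correction above is what makes the Langevin update a valid MCMC proposal rather than plain noisy gradient descent.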