Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks. In this work, we present methods to train Invertible Neural Networks (INNs) as (variational) autoencoders, which we call INN (variational) autoencoders. Our experiments on MNIST, CIFAR and CelebA show that for small bottleneck sizes our INN autoencoder achieves results similar to those of a classical autoencoder, while for large bottleneck sizes it outperforms its classical counterpart. Based on these empirical results, we hypothesize that INN autoencoders might not suffer from any intrinsic information loss and are therefore not bounded by a maximal number of layers (depth) beyond which only suboptimal results can be achieved.
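The abstract gives no implementation details, so the following is only a minimal sketch of how an INN can be used as an autoencoder: a stack of RealNVP-style affine coupling blocks (an assumption, not necessarily the authors' architecture), with a bottleneck enforced by keeping only the first `bottleneck` latent dimensions and zeroing the rest before running the inverse pass. All names (`AffineCoupling`, `INNAutoencoder`, `bottleneck`) are illustrative.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling block; invertible by construction."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        # Sub-network predicting scale and translation for the second half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(torch.tanh(s)) + t  # tanh keeps scales stable
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(s))
        return torch.cat([y1, x2], dim=1)

class INNAutoencoder(nn.Module):
    """Stack of coupling blocks; the bottleneck is the first k latent dims."""
    def __init__(self, dim, bottleneck, n_blocks=4):
        super().__init__()
        self.bottleneck = bottleneck
        self.blocks = nn.ModuleList([AffineCoupling(dim) for _ in range(n_blocks)])

    def encode(self, x):
        for b in self.blocks:
            x = b(x)
            x = torch.roll(x, x.shape[1] // 2, dims=1)  # mix halves between blocks
        return x

    def decode(self, z):
        for b in reversed(self.blocks):
            z = torch.roll(z, -(z.shape[1] // 2), dims=1)
            z = b.inverse(z)
        return z

    def reconstruct(self, x):
        z = self.encode(x)
        # Keep only the bottleneck dims; zero out the rest before inverting.
        z = torch.cat([z[:, :self.bottleneck],
                       torch.zeros_like(z[:, self.bottleneck:])], dim=1)
        return self.decode(z)

# Usage sketch: reconstruction loss plus an (assumed) penalty pushing
# the non-bottleneck latent dims toward zero.
model = INNAutoencoder(dim=784, bottleneck=32)
x = torch.randn(16, 784)
z = model.encode(x)
loss = ((model.reconstruct(x) - x) ** 2).mean() + (z[:, 32:] ** 2).mean()
```

One appeal of this construction is that the decoder is the exact inverse of the encoder, so no separate decoder weights exist and, in principle, no information is discarded except by the explicit bottleneck mask.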