Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks. In this work, we present methods to train Invertible Neural Networks (INNs) as (variational) autoencoders, which we call INN (variational) autoencoders. Our experiments on MNIST, CIFAR and CelebA show that for small bottleneck sizes our INN autoencoder achieves results similar to the classical autoencoder. However, for large bottleneck sizes our INN autoencoder outperforms its classical counterpart. Based on the empirical results, we hypothesize that INN autoencoders might not have any intrinsic information loss and are therefore not bounded by a maximal number of layers (depth) beyond which only suboptimal results can be achieved.
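To make the core idea concrete, the following is a minimal sketch of an INN autoencoder, assuming NICE-style additive coupling layers as the invertible building block; the names (`CouplingLayer`, `INNAutoencoder`, `bottleneck`) and all hyperparameters are illustrative, not the paper's actual architecture. The network maps x to z = f(x), keeps only the first `bottleneck` dimensions of z as the latent code, zero-pads the discarded dimensions, and reconstructs by running the same network in reverse.

```python
import torch
import torch.nn as nn

class CouplingLayer(nn.Module):
    """Additive (NICE-style) coupling layer: one half of the input is shifted
    by a function of the other half, so the layer is exactly invertible.
    Assumes an even input dimension."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(), nn.Linear(hidden, dim // 2)
        )

    def forward(self, x):
        x = x.flip(1)                      # fixed permutation so both halves get updated across layers
        a, b = x.chunk(2, dim=1)
        return torch.cat([a, b + self.net(a)], dim=1)

    def inverse(self, y):
        a, b = y.chunk(2, dim=1)
        return torch.cat([a, b - self.net(a)], dim=1).flip(1)

class INNAutoencoder(nn.Module):
    """Invertible network used as an autoencoder: encode with f, keep the
    first `bottleneck` dimensions of z, zero-pad, decode with f^-1."""
    def __init__(self, dim, bottleneck, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(CouplingLayer(dim) for _ in range(n_layers))
        self.dim, self.bottleneck = dim, bottleneck

    def encode(self, x):
        for layer in self.layers:
            x = layer(x)
        return x[:, :self.bottleneck]      # discard the remaining dimensions

    def decode(self, z):
        pad = torch.zeros(z.size(0), self.dim - self.bottleneck, device=z.device)
        x = torch.cat([z, pad], dim=1)     # zero-pad the discarded dimensions
        for layer in reversed(self.layers):
            x = layer.inverse(x)
        return x

# Toy training loop on random data; real experiments would use e.g. flattened MNIST.
model = INNAutoencoder(dim=784, bottleneck=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                    # stand-in batch of flattened 28x28 images
for step in range(100):
    recon = model.decode(model.encode(x))
    loss = nn.functional.mse_loss(recon, x)  # plain reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that in this formulation the encoder and decoder share all weights, and when `bottleneck == dim` the map is exactly invertible, so reconstruction is perfect up to numerical error; this is one way to read the abstract's hypothesis that INN autoencoders have no intrinsic information loss.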