Reversibility in artificial neural networks allows us to retrieve the input given an output. We present feature alignment, a method for approximating reversibility in arbitrary neural networks. We train a network by minimizing the distance between the output of a data point and the output obtained from a random input. We apply the technique to the MNIST, CIFAR-10, CelebA and STL-10 image datasets. We demonstrate that this method can roughly recover images from their latent representations alone, without the need for a decoder. By adopting the formulation of variational autoencoders, we show that it is possible to generate new images that are statistically comparable to the training data. Furthermore, we demonstrate that the quality of the generated images can be improved by coupling a generator with a discriminator. Finally, we show how this method, with a few minor modifications, can be used to train networks locally, which has the potential to save computational memory resources.
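The training objective stated above can be read as follows; the sketch below is our illustrative interpretation, not the authors' reference implementation. It assumes a PyTorch setup in which a toy encoder `net` stands in for an arbitrary network, each data batch is paired with a freshly drawn random input, the distance between the two outputs is taken as the mean squared error, and `alignment_step` is a hypothetical helper name.

```python
# Minimal sketch of the alignment objective, assuming PyTorch and an
# MNIST-shaped input. All architectural and optimization choices here are
# illustrative assumptions, not the paper's reference code.
import torch
import torch.nn as nn

net = nn.Sequential(          # toy encoder standing in for an arbitrary network
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 64),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def alignment_step(x_data):
    """One training step: pull the output computed from a random input toward
    the output of a data batch, so the mapping can later be approximately
    reversed from the latent representation alone."""
    x_rand = torch.rand_like(x_data)        # random input paired with this batch
    z_data = net(x_data)                    # output of the data points
    z_rand = net(x_rand)                    # output with respect to the random input
    loss = ((z_data - z_rand) ** 2).mean()  # distance between the two outputs
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with an MNIST-shaped batch:
# alignment_step(torch.rand(32, 1, 28, 28))
```

In practice this loss term would be combined with the task or reconstruction objective described in the paper; the sketch isolates only the alignment term for clarity.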