Neural networks play a growing role in many scientific disciplines, including physics. Variational Autoencoders (VAEs) are neural networks that represent the essential information of a high-dimensional data set in a low-dimensional latent space, which has a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, maps its input onto a position in latent space and additionally provides uncertainty information in terms of a variance around this position. In this work, an extension of the autoencoder architecture is introduced, the FisherNet. In this architecture, the latent space uncertainty is not generated using an additional information channel in the encoder, but derived from the decoder by means of the Fisher information metric. This architecture has advantages from a theoretical point of view, as it provides a direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE, and that its learning performance scales better with the number of latent space dimensions.
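As a minimal sketch (not the paper's implementation) of the idea behind the FisherNet, the following illustrates how a Fisher information metric in latent space can be derived from a decoder alone. It assumes a Gaussian data likelihood with fixed noise level `sigma`, for which the pulled-back Fisher metric is F(z) = J(z)^T J(z) / sigma^2, with J the decoder Jacobian; the names `toy_decoder` and `sigma` are illustrative.

```python
import numpy as np

def toy_decoder(z):
    # Hypothetical nonlinear decoder mapping a 2-D latent vector to 3-D data space.
    return np.array([z[0], z[1] ** 2, np.sin(z[0]) * z[1]])

def decoder_jacobian(decoder, z, eps=1e-6):
    # Finite-difference Jacobian J[i, j] = d decoder_i / d z_j.
    z = np.asarray(z, dtype=float)
    f0 = decoder(z)
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (decoder(z + dz) - f0) / eps
    return J

def latent_fisher_metric(decoder, z, sigma=1.0):
    # Pull-back of the Fisher metric of a Gaussian likelihood with fixed
    # variance sigma^2 into latent space: F = J^T J / sigma^2.
    J = decoder_jacobian(decoder, z)
    return J.T @ J / sigma ** 2

F = latent_fisher_metric(toy_decoder, np.array([0.5, 1.0]))
# F is a symmetric positive semi-definite matrix; its inverse provides a
# covariance estimate around z, including uncertainty cross-correlations,
# without any dedicated variance output in the encoder.
print(F)
```

In this picture, the encoder only needs to produce a latent position; the variance information that a VAE encoder would output separately is instead read off from the local geometry of the decoder.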