Although unconditional feature inverters are the foundation of many synthesis tasks, training them requires a large computational overhead, high decoding capacity, or additional autoregressive priors. We propose training an adversarially robust (AR) encoder to learn disentangled and perceptually aligned bottleneck features, making them easily invertible. Then, by training a simple generator with the mirror architecture of the encoder, we achieve superior reconstructions and generalization over standard approaches. We exploit these properties in an encoding-decoding network based on AR features and demonstrate its outstanding performance on three applications: anomaly detection, style transfer, and image denoising. Comparisons against alternative learning-based methods show that our model attains improved performance with significantly fewer trainable parameters.
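The sketch below illustrates the encoder/mirror-decoder pairing described above. It is not the authors' implementation: all layer sizes, the 32x32 input resolution, and the MSE reconstruction objective are illustrative assumptions, and the adversarial training that would make the encoder robust (e.g. PGD-based training of the encoder as a classifier before freezing it) is omitted.

```python
# Minimal sketch (assumed architecture, not the paper's code): a small conv
# encoder and a generator built as its mirror, trained to invert frozen
# bottleneck features with a pixel-wise loss.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1),       # 8x8 -> 4x4
        )

    def forward(self, x):
        return self.net(x)

class MirrorDecoder(nn.Module):
    """Generator with the mirror architecture of the encoder."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),        # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),         # 16x16 -> 32x32
        )

    def forward(self, z):
        return self.net(z)

# Feature inversion: freeze the (adversarially robust) encoder and train only
# the lightweight mirror decoder to reconstruct images from its features.
encoder, decoder = Encoder(), MirrorDecoder()
for p in encoder.parameters():
    p.requires_grad_(False)

x = torch.randn(8, 3, 32, 32)            # dummy batch of images
x_hat = decoder(encoder(x))
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective (illustrative)
```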