Many applications of deep learning for image generation use perceptual losses for either training or fine-tuning of the generator networks. The use of perceptual loss, however, incurs repeated forward-backward passes through a large image classification network, as well as considerable memory overhead to store the activations of that network. It is therefore desirable, and sometimes even critical, to eliminate these overheads. In this work, we propose a way to train generator networks using approximations of perceptual loss that are computed without forward-backward passes. Instead, we use a simpler perceptual gradient network that directly synthesizes the gradient field of a perceptual loss. We introduce the concept of proxy targets, which stabilize the predicted gradient field so that learning with it does not lead to divergence or oscillations. In addition, our method allows interpretation of the predicted gradient, providing insight into the internals of perceptual loss and suggesting potential ways to improve it in future work.
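To make the core idea concrete, the following is a minimal PyTorch sketch of the training loop the abstract describes: instead of backpropagating a perceptual loss through a large classifier such as VGG, a lightweight gradient network predicts the gradient of the loss with respect to the generated image, and that predicted field is injected into the generator via a surrogate loss. The placeholder architectures, the name `grad_net`, and the surrogate-loss construction are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (assumed details, not the paper's exact method):
# training a generator from a predicted perceptual-gradient field,
# with no forward-backward pass through a large classification network.
import torch
import torch.nn as nn

# Placeholder networks; real architectures would be far deeper.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # image-to-image generator
grad_net = nn.Sequential(nn.Conv2d(6, 3, 3, padding=1))   # hypothetical perceptual gradient network

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

def train_step(inp: torch.Tensor, target: torch.Tensor) -> None:
    opt.zero_grad()
    out = generator(inp)
    # Predict the gradient of a perceptual loss w.r.t. the generated image,
    # conditioned on the output and the target. The gradient network itself
    # is queried without tracking gradients: it only supplies the field.
    with torch.no_grad():
        g = grad_net(torch.cat([out, target], dim=1))
    # Inject the predicted field: the surrogate loss sum(out * g) has
    # gradient exactly g w.r.t. `out`, so backprop carries the predicted
    # perceptual gradient into the generator's parameters.
    surrogate = (out * g).sum()
    surrogate.backward()
    opt.step()

# Example usage with random tensors standing in for a real dataset.
train_step(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```

Under this reading, the proxy targets mentioned in the abstract can be understood as shaping the predicted field so that it consistently points toward a stable target, which is what prevents the divergence or oscillation a raw predicted gradient could cause; the sketch above treats `grad_net` as a black box and omits how it is trained.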