Generative DNNs are a powerful tool for image synthesis, but they are limited by their computational load. At the same time, given a trained model and a task, e.g., generating faces within a range of characteristics, output image quality is unevenly distributed across inputs with different characteristics. It follows that we can restrict the model's complexity on some instances while maintaining high quality. We propose a method for reducing computation by adding so-called early-exit branches to the original architecture and dynamically switching the computational path depending on how difficult the output will be to render. We apply our method to two different SOTA models performing generative tasks: generation from a semantic map, and cross-reenactment of face expressions; we show that it can output images subject to custom lower-quality thresholds. For a threshold of LPIPS ≤ 0.1, we reduce their computation by up to one half. This is especially relevant for real-time applications such as face synthesis, where quality loss must be contained but most inputs need fewer computations than the most complex instances.
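To make the mechanism concrete, the snippet below is a minimal sketch, not the paper's implementation: it assumes a PyTorch-style generator split into sequential stages, a hypothetical RGB exit head after each stage, and an externally supplied difficulty score in [0, 1]. All names (EarlyExitGenerator, difficulty) and the linear difficulty-to-exit schedule are our illustrative assumptions; the actual difficulty estimator and exit-selection policy belong to the method and are not reproduced here.

```python
# Minimal sketch of an early-exit generator (PyTorch). Easy inputs leave
# the backbone at a shallow exit; hard ones traverse all stages. Names
# and the difficulty-to-exit mapping are hypothetical.
import torch
import torch.nn as nn

class EarlyExitGenerator(nn.Module):
    def __init__(self, channels: int = 64, num_stages: int = 4):
        super().__init__()
        # Backbone split into sequential stages.
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_stages)
        ])
        # One exit head per stage, rendering RGB from intermediate features.
        self.exits = nn.ModuleList([
            nn.Conv2d(channels, 3, 3, padding=1) for _ in range(num_stages)
        ])

    def forward(self, feats: torch.Tensor, difficulty: float):
        # difficulty in [0, 1]: low scores pick a shallow exit and skip
        # the remaining stages, saving computation on easy instances.
        exit_idx = min(int(difficulty * len(self.stages)),
                       len(self.stages) - 1)
        for i in range(exit_idx + 1):
            feats = self.stages[i](feats)
        return self.exits[exit_idx](feats), exit_idx

# Usage: the deeper the chosen exit, the more computation is spent.
gen = EarlyExitGenerator()
x = torch.randn(1, 64, 128, 128)
img, used = gen(x, difficulty=0.2)   # exits after the first stage
img, used = gen(x, difficulty=0.95)  # runs all four stages
```

The design choice this illustrates is that every exit head shares the same backbone prefix, so training the branches adds little overhead while inference cost scales with the chosen exit depth.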