Since their invention, generative adversarial networks (GANs) have shown outstanding results in many applications. GANs are powerful yet resource-hungry deep learning models. Their main difference from ordinary deep learning models lies in the nature of their output: a GAN can, for example, generate an entire image, whereas other models detect objects in or classify an image. Consequently, the network's architecture and numeric precision affect both the quality and the speed of the solution, making GAN acceleration pivotal. Techniques for accelerating GANs fall into three main tracks: (1) memory compression, (2) computation optimization, and (3) data-flow optimization. Because data transfer is the main source of energy consumption, memory compression yields the largest savings. In this paper, we therefore survey memory compression techniques for CNN-based GANs. Additionally, the paper summarizes opportunities and challenges in GAN acceleration and suggests open research problems for further investigation.