Single-image super-resolution can support robotic tasks in environments where a reliable visual stream is required to monitor the mission, handle teleoperation, or inspect relevant visual details. In this work, we propose an efficient Generative Adversarial Network model for real-time super-resolution. We adopt a tailored architecture derived from the original SRGAN, together with model quantization, to speed up execution on CPU and Edge TPU devices, reaching up to 200 fps at inference time. We further optimize our model by distilling its knowledge into a smaller version of the network, obtaining remarkable improvements over the standard training approach. Our experiments show that our fast and lightweight model maintains satisfying image quality compared to heavier state-of-the-art models. Finally, we conduct experiments on image transmission under bandwidth degradation to highlight the advantages of the proposed system for mobile robotic applications.
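The knowledge-distillation step mentioned above can be sketched as a combined training objective: the student network is supervised both by the ground-truth high-resolution target and by the larger teacher network's output. The loss weighting, function names, and flattened-pixel representation below are illustrative assumptions, not the paper's actual implementation.

```python
def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Combine ground-truth supervision with teacher imitation.

    alpha weights the ground-truth term; (1 - alpha) weights the
    teacher-imitation term. The value 0.5 is an illustrative assumption.
    """
    task = mse(student_out, target)          # student vs. high-res target
    distill = mse(student_out, teacher_out)  # student mimics the teacher
    return alpha * task + (1 - alpha) * distill

# Toy example with flattened "images" of four pixels each.
target = [0.0, 0.5, 1.0, 0.5]
teacher_out = [0.0, 0.5, 1.0, 0.5]   # teacher reproduces the target
student_out = [0.1, 0.4, 0.9, 0.6]   # weaker student approximation

loss = distillation_loss(student_out, teacher_out, target, alpha=0.5)
```

In a real training loop the same idea would be applied per batch on GPU tensors, with the teacher's weights frozen so gradients flow only through the student.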