Many applications, such as forensics, surveillance, satellite imaging, and medical imaging, demand High-Resolution (HR) images. However, obtaining an HR image is not always possible due to the limitations and cost of optical sensors. An alternative, software-driven solution called Single Image Super-Resolution (SISR) aims to reconstruct an HR image from a given Low-Resolution (LR) image. Most supervised SISR solutions use the ground-truth HR image as the target and do not exploit the information contained in the LR image, which could be valuable. In this work, we introduce a Triplet Loss-based Generative Adversarial Network, hereafter referred to as SRTGAN, for the Image Super-Resolution problem under real-world degradation. We introduce a new triplet-based adversarial loss function that exploits the information in the LR image by using it as a negative sample. Providing the patch-based discriminator with access to both HR and LR images allows it to better differentiate between them, thereby strengthening the adversary. Further, we propose to fuse the adversarial loss, content loss, perceptual loss, and quality loss to obtain a Super-Resolution (SR) image with high perceptual fidelity. We validate the superior performance of the proposed method over existing methods on the RealSR dataset in terms of both quantitative and qualitative metrics.
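The following is a minimal, hypothetical PyTorch sketch of the loss fusion described above. The weight values, the quality-scoring network, the use of an upsampled LR image as the triplet negative, and the use of discriminator responses as triplet embeddings are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def triplet_adversarial_loss(disc, sr, hr, lr_up, margin=1.0):
    """Triplet-style adversarial term for the generator: pull the
    discriminator's response to the SR image (anchor) toward its response
    to the real HR image (positive) and away from the upsampled LR image
    (negative). Treating discriminator outputs as embeddings is an assumption."""
    d_sr = disc(sr)      # anchor
    d_hr = disc(hr)      # positive
    d_lr = disc(lr_up)   # negative
    return F.triplet_margin_loss(d_sr.flatten(1), d_hr.flatten(1),
                                 d_lr.flatten(1), margin=margin)

def generator_loss(disc, vgg_features, quality_net, sr, hr, lr_up,
                   w_content=1.0, w_percep=0.1, w_quality=0.05, w_adv=0.01):
    """Weighted fusion of content, perceptual, quality, and triplet-based
    adversarial losses. Weights and the quality network are placeholders."""
    content = F.l1_loss(sr, hr)                             # pixel-wise fidelity
    percep = F.l1_loss(vgg_features(sr), vgg_features(hr))  # perceptual (feature) distance
    quality = -quality_net(sr).mean()                       # encourage high predicted quality
    adv = triplet_adversarial_loss(disc, sr, hr, lr_up)     # triplet adversarial term
    return (w_content * content + w_percep * percep +
            w_quality * quality + w_adv * adv)
```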