In this paper, we introduce a novel implicit neural network for the task of single image super-resolution at arbitrary scale factors. To do this, we represent an image as a decoding function that maps locations in the image, along with their associated features, to their corresponding pixel attributes. Since the pixel locations are continuous in this representation, our method can be queried at any location in an image of arbitrary resolution. To retrieve an image at a particular resolution, we apply the decoding function to a grid of locations, each of which refers to the center of a pixel in the output image. In contrast to other techniques, our dual interactive neural network decouples content and positional features. As a result, we obtain a fully implicit representation of the image that solves the super-resolution problem at arbitrary (real-valued) scales using a single model. We demonstrate the efficacy and flexibility of our approach against the state of the art on publicly available benchmark datasets.
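To make the query mechanism concrete, the following is a minimal sketch (not the paper's actual architecture) of how an implicit decoder can be evaluated at a grid of continuous pixel-center coordinates to render an output at any target resolution. The decoder, its feature inputs, and the helper names below are placeholders introduced for illustration only.

```python
# Minimal sketch, assuming a toy stand-in for the decoding function:
# render an image at an arbitrary resolution by querying the decoder
# at the continuous coordinates of each output pixel's center.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Hypothetical decoder: maps a 2-D coordinate plus a content feature
    vector to an RGB value. The real model decouples content and positional
    features; this toy MLP simply concatenates them."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, coords, feats):
        # coords: (N, 2) in [-1, 1]; feats: (N, feat_dim)
        return self.mlp(torch.cat([coords, feats], dim=-1))

def pixel_center_grid(height, width):
    # Continuous coordinates of pixel centers in [-1, 1] for an H x W output.
    ys = (torch.arange(height) + 0.5) / height * 2 - 1
    xs = (torch.arange(width) + 0.5) / width * 2 - 1
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)  # (H*W, 2)

decoder = ToyDecoder()
H, W = 256, 256                            # any target resolution, set at query time
coords = pixel_center_grid(H, W)           # (H*W, 2) pixel-center locations
feats = torch.randn(coords.shape[0], 64)   # placeholder for per-location encoder features
rgb = decoder(coords, feats).reshape(H, W, 3)
```

Because the grid size is chosen only at query time, the same trained decoder can produce outputs at any (real-valued) scale factor, which is the property the abstract highlights.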