In recent years, research on super-resolution has focused primarily on unsupervised models, blind networks, and the use of optimization methods in non-blind models, while comparatively little work has examined the loss function used in the super-resolution process. Most of those studies employ perceptual similarity only in a conventional way, even though a well-designed loss can improve the quality of other methods as well. In this article, a new weighting method for pixel-wise loss is proposed. With this method, trainable weights based on the overall structure of the image and its perceptual features can be used while the advantages of pixel-wise loss are retained. A criterion for comparing loss weights is also introduced, so that the weights can be estimated directly by a convolutional neural network using this criterion. Furthermore, the expectation-maximization method is used for the simultaneous estimation of the super-resolution network and the weighting network. In addition, a new activation function, called "FixedSum", is introduced, which keeps the sum of all components of a vector constant while keeping each output component between zero and one. As shown in the experimental results section, the loss weighted by the proposed method leads to better results than the unweighted loss in terms of both signal-to-noise ratio and perceptual similarity.
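The abstract gives no formulas, so the sketch below is only an illustration of the two ideas it names: an activation that maps network outputs to weights in [0, 1] whose sum is held (approximately) fixed, and a pixel-wise loss weighted by those values. The function names, the sigmoid-plus-rescaling construction, and the choice of the fixed sum (here, the number of pixels, so the weighted loss stays on the scale of an unweighted L1 loss) are assumptions for illustration, not the paper's actual definitions of FixedSum or the proposed weighting method.

```python
# Hypothetical sketch only: the paper's FixedSum activation and weighting scheme
# may be defined differently.
import torch


def fixed_sum_activation(x: torch.Tensor, target_sum: float, dim: int = -1,
                         eps: float = 1e-8) -> torch.Tensor:
    """Map `x` to weights in [0, 1] along `dim` that sum (approximately) to `target_sum`."""
    w = torch.sigmoid(x)                                          # components in (0, 1)
    w = w * target_sum / (w.sum(dim=dim, keepdim=True) + eps)     # rescale to the fixed sum
    return w.clamp(0.0, 1.0)  # clamping can make the sum only approximate when weights overshoot


def weighted_pixel_loss(sr: torch.Tensor, hr: torch.Tensor,
                        logits: torch.Tensor) -> torch.Tensor:
    """Pixel-wise L1 loss weighted by FixedSum-style weights.

    Assumes `logits` (output of a weighting network) has the same shape as `sr`,
    i.e. one raw weight per pixel and channel.
    """
    n_pixels = sr[0].numel()                                      # fixed sum = number of elements
    weights = fixed_sum_activation(logits.flatten(1), target_sum=float(n_pixels))
    per_pixel = (sr - hr).abs().flatten(1)                        # pixel-wise L1 error
    return (weights * per_pixel).mean()
```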