Recently, Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful and promising tool for representing various kinds of signals, owing to their continuous and differentiable properties, and have shown advantages over classical discretized representations. However, the training of neural networks for INRs utilizes only input-output pairs; the derivatives of the target output with respect to the input, which are available in some cases, are usually ignored. In this paper, we propose a training paradigm for INRs whose target outputs are image pixels, encoding image derivatives in addition to image values in the neural network. Specifically, we use finite differences to approximate image derivatives. We show how this training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering, and demonstrate that it improves the data efficiency and generalization capability of INRs. The code of our method is available at \url{https://github.com/megvii-research/Sobolev_INRs}.
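To make the idea concrete, the snippet below is a minimal sketch (not the released implementation) of a first-order Sobolev-style loss for an image INR evaluated on the full pixel grid: pixel values are supervised with an L2 term, and image derivatives are supervised by comparing central finite differences of the prediction and the target. The helper names \texttt{finite\_difference} and \texttt{sobolev\_loss} and the weight \texttt{lam} are hypothetical; the actual method may instead obtain network derivatives analytically via autograd.

\begin{verbatim}
# Hedged sketch: Sobolev-style training signal for an image INR.
# Assumes the INR has been rendered to a full image of shape (B, C, H, W).
import torch
import torch.nn.functional as F

def finite_difference(img):
    """Central-difference approximation of the spatial derivatives
    of an image tensor of shape (B, C, H, W)."""
    c = img.shape[1]
    kx = torch.tensor([[[[-0.5, 0.0, 0.5]]]],
                      dtype=img.dtype, device=img.device)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(F.pad(img, (1, 1, 0, 0), mode="replicate"),
                  kx.repeat(c, 1, 1, 1), groups=c)   # d/dx
    gy = F.conv2d(F.pad(img, (0, 0, 1, 1), mode="replicate"),
                  ky.repeat(c, 1, 1, 1), groups=c)   # d/dy
    return gx, gy

def sobolev_loss(pred, target, lam=1.0):
    """L2 loss on pixel values plus a weighted L2 loss on
    finite-difference image derivatives (hypothetical weighting)."""
    value_term = F.mse_loss(pred, target)
    pgx, pgy = finite_difference(pred)
    tgx, tgy = finite_difference(target)
    deriv_term = F.mse_loss(pgx, tgx) + F.mse_loss(pgy, tgy)
    return value_term + lam * deriv_term
\end{verbatim}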