This paper proposes a regularizer called the Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of implicit neural representations (INRs). An INR is a fully connected network that can represent a signal at a level of detail not restricted by grid resolution. However, its generalization ability leaves room for improvement, especially on non-uniformly sampled data. The proposed INRR is based on a learned Dirichlet Energy (DE) that measures similarities between rows/columns of the represented matrix. The smoothness of the Laplacian matrix is further incorporated by parameterizing the DE with a tiny INR. By integrating the signal's self-similarity with the smoothness of the Laplacian matrix, INRR improves the generalization of INRs in signal representation. Through well-designed numerical experiments, the paper also reveals a series of properties derived from INRR, including momentum-method-like behavior in the convergence trajectory and multi-scale similarity. Moreover, the proposed method can also improve the performance of other signal representation methods.
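The Dirichlet energy at the core of INRR can be sketched as follows. This is a minimal numpy illustration of the general formula DE(M, W) = tr(Mᵀ L M) with Laplacian L = D − W, not the paper's implementation; in INRR the similarity matrix W would itself be produced by a tiny INR rather than given, and the names below are chosen for illustration only.

```python
import numpy as np

def laplacian(W):
    # Graph Laplacian L = D - W from a symmetric similarity matrix W,
    # where D is the diagonal degree matrix.
    return np.diag(W.sum(axis=1)) - W

def dirichlet_energy(M, W):
    # DE = tr(M^T L M) = (1/2) * sum_ij W_ij * ||M_i - M_j||^2:
    # rows that W marks as similar are penalized for being far apart.
    L = laplacian(W)
    return np.trace(M.T @ L @ M)

# Toy check: rows 0 and 1 of M are identical, so the strong
# similarity W[0,1] = 1 contributes no energy; only the weak
# links to the dissimilar row 2 do.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
M = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [5.0, 0.0]])
E = dirichlet_energy(M, W)  # equals 0.5 * sum_ij W_ij ||M_i - M_j||^2
```

As a regularizer, this term is added to the INR's reconstruction loss, so minimizing it pulls rows (or columns) of the represented matrix toward each other in proportion to their learned similarity.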
Title (translated): Self-Regularized Implicit Neural Representation