Propelled by new designs that circumvent the spectral bias, implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still lack a proper theoretical characterization of how INRs represent signals. In this work, we aim to fill this gap, and we propose a novel unified perspective to analyse INRs theoretically. Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies. This structure allows INRs to express signals with an exponentially increasing frequency support using a number of parameters that only grows linearly with depth. We then explore the inductive bias of INRs by exploiting recent results on the empirical neural tangent kernel (NTK). Specifically, we show that the eigenfunctions of the NTK can be seen as dictionary atoms whose inner products with the target signal determine the final reconstruction performance. In this regard, we reveal that meta-learning the initialization reshapes the NTK in a way analogous to dictionary learning, building dictionary atoms as combinations of the examples seen during meta-training. Our results make it possible to design and tune novel INR architectures, but can also be of interest to the wider deep learning theory community.
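As a minimal illustration of the harmonic structure mentioned above, consider a toy two-layer sine-activated INR with scalar input $t$, a single initial mapping frequency $\omega$, and a hidden amplitude $a$ (these choices are made only for this sketch and are not the general architecture analysed in the paper). The Jacobi--Anger expansion shows that composing sinusoids places the entire frequency support of the output on odd integer harmonics of $\omega$:
\[
\sin\!\big(a\sin(\omega t)\big) \;=\; 2\sum_{\substack{k \geq 1 \\ k\ \mathrm{odd}}} J_k(a)\,\sin(k\omega t),
\]
where $J_k$ denotes the Bessel function of the first kind. Each additional layer composes further sinusoids and therefore only populates higher integer harmonics of the initial mapping frequencies, which is consistent with the dictionary-of-harmonics picture while the parameter count grows only linearly with depth.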