In this paper, a novel multi-head multi-layer perceptron (MLP) structure is presented for implicit neural representation (INR). Since conventional rectified linear unit (ReLU) networks are shown to exhibit a spectral bias towards learning the low-frequency features of a signal, we aim to mitigate this defect by taking advantage of the local structure of signals. More specifically, an MLP body is used to capture the global features of the underlying generator function of the desired signal. Then, several heads are utilized to reconstruct disjoint local features of the signal, and, to reduce the computational complexity, sparse layers are deployed to attach the heads to the body. Through various experiments, we show that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior generalization capabilities. Finally, simulation results confirm that the proposed multi-head structure outperforms existing INR methods at considerably lower computational cost.
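As a rough illustration of the described body-plus-heads design (not the authors' exact architecture), the PyTorch sketch below wires a shared ReLU body to several small heads. The final body layer is split into disjoint per-head feature slices, which emulates a sparse, block-structured body-to-head connection; all module names and hyperparameters (width, num_heads, slice_width, patch_pixels) are placeholder assumptions.

```python
import torch
import torch.nn as nn


class MultiHeadINR(nn.Module):
    """Minimal sketch of a multi-head MLP for INR: a shared ReLU body
    learns global features, and each head decodes one disjoint patch.
    Sizes and structure are illustrative assumptions, not the paper's
    reported configuration."""

    def __init__(self, in_dim=2, width=256, depth=4,
                 num_heads=16, slice_width=16, patch_pixels=64):
        super().__init__()
        layers = [nn.Linear(in_dim, width), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ReLU()]
        # The last body layer exposes num_heads disjoint feature slices,
        # emulating a sparse body-to-head attaching layer.
        layers += [nn.Linear(width, num_heads * slice_width), nn.ReLU()]
        self.body = nn.Sequential(*layers)
        self.slice_width = slice_width
        # One small linear head per patch: each head sees only its own
        # slice of the body output, not the full feature vector.
        self.heads = nn.ModuleList(
            [nn.Linear(slice_width, patch_pixels) for _ in range(num_heads)]
        )

    def forward(self, coords):
        # coords: (batch, in_dim) input coordinates.
        feats = self.body(coords)
        # Split the shared features into disjoint per-head slices.
        slices = feats.split(self.slice_width, dim=-1)
        # Each head reconstructs its own local patch of the signal.
        patches = [head(s) for head, s in zip(self.heads, slices)]
        return torch.cat(patches, dim=-1)  # (batch, num_heads * patch_pixels)


# Usage sketch: reconstruct 64-pixel patches from 2-D coordinates.
model = MultiHeadINR()
out = model(torch.rand(8, 2))  # -> shape (8, 16 * 64)
```

Under this reading, the heads add capacity for high-frequency local detail while the narrow slices keep the attaching layer cheap, which is one plausible way the reported reduction in computational cost could arise.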