We show that typical implicit regularization assumptions for deep neural networks (for regression) do not hold for coordinate-MLPs, a family of MLPs now ubiquitous in computer vision for representing high-frequency signals. The lack of such an implicit bias disrupts smooth interpolation between training samples and hampers generalization across signal regions with different spectra. We investigate this behavior through a Fourier lens and find that as the bandwidth of a coordinate-MLP increases, lower frequencies tend to be suppressed unless a suitable prior is provided explicitly. Based on these insights, we propose a simple regularization technique that mitigates this problem and can be incorporated into existing networks without any architectural modifications.
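The bandwidth/frequency-suppression trade-off described above can be illustrated with a minimal sketch. The following uses a linear ridge fit on a deterministic Fourier-feature bank as a simple stand-in for a coordinate-MLP's encoding layer; the frequency bank `B`, the toy 1D signal, and all parameter values are illustrative assumptions, not the paper's method or regularizer.

```python
import numpy as np

def fourier_features(x, B):
    """Map coordinates x (N,) to [cos, sin] features at frequencies B (m,)."""
    proj = 2.0 * np.pi * np.outer(x, B)                          # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)  # (N, 2m)

# Toy 1D signal mixing a low-frequency (1 Hz) and a high-frequency (32 Hz) component.
x = np.linspace(0.0, 1.0, 256)
y = np.sin(2 * np.pi * x) + 0.5 * np.sin(2 * np.pi * 32 * x)

def fit_error(B, lam=1e-6):
    """RMSE of a ridge regression of the signal onto the feature bank B."""
    Phi = fourier_features(x, B)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return float(np.sqrt(np.mean((Phi @ w - y) ** 2)))

# A narrow bank (frequencies up to 4) cannot represent the 32 Hz component,
# while a wide bank (frequencies up to 64) spans both components.
err_narrow = fit_error(np.arange(1, 65) / 16.0)
err_wide = fit_error(np.arange(1, 65).astype(float))
```

Here the representable bandwidth is set directly by the feature bank, making the spectrum/capacity coupling explicit; in a coordinate-MLP the analogous bandwidth arises from the encoding and activations, which is where the frequency suppression studied in the paper appears.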