Recent work in Deep Learning has re-imagined the representation of data as functions mapping from a coordinate space to an underlying continuous signal. When such functions are approximated by neural networks, this introduces a compelling alternative to the more common multi-dimensional array representation. Recent work on such Implicit Neural Representations (INRs) has shown that, following careful architecture search, INRs can outperform established compression methods such as JPEG (e.g. Dupont et al., 2021). In this paper, we propose crucial steps towards making such ideas scalable: Firstly, we employ state-of-the-art network sparsification techniques to drastically improve compression. Secondly, we introduce the first method allowing sparsification to be employed in the inner loop of commonly used Meta-Learning algorithms, drastically improving compression while reducing the computational cost of learning INRs. The generality of this formalism allows us to present results on diverse data modalities such as images, manifolds, signed distance functions, 3D shapes, and scenes, several of which establish new state-of-the-art results.
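To make the coordinate-to-signal idea concrete, the sketch below fits a tiny implicit neural representation to a single image in PyTorch: a small MLP with sine activations maps each pixel's (x, y) coordinate to its RGB value, so the trained weights themselves act as a code for the image. The architecture, hyperparameters, and helper names (SineLayer, INR, fit_inr) are illustrative assumptions only; they are not the searched architecture, the sparsification scheme, or the meta-learning procedure referred to above.

```python
# Minimal sketch of an Implicit Neural Representation (INR): a small coordinate
# MLP with sine activations, overfit to one image so that its weights serve as
# the compressed code for that image. All names and hyperparameters below are
# illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Linear layer followed by a sine nonlinearity (SIREN-style)."""

    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))


class INR(nn.Module):
    """Coordinate network f_theta: (x, y) in [-1, 1]^2 -> (r, g, b)."""

    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers = [SineLayer(2, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 2)]
        layers += [nn.Linear(hidden, 3)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


def fit_inr(image, steps=1000, lr=1e-3):
    """Overfit an INR to a single (H, W, 3) image tensor with values in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # one (x, y) per pixel
    targets = image.reshape(-1, 3)

    model = INR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - targets) ** 2).mean()  # pixel-wise MSE
        loss.backward()
        opt.step()
    return model, loss.item()


if __name__ == "__main__":
    dummy_image = torch.rand(32, 32, 3)  # stand-in for a real image
    model, mse = fit_inr(dummy_image, steps=200)
    print(f"final reconstruction MSE: {mse:.4f}")
```

Storing such a network instead of the pixel array is what makes sparsification relevant for compression: every weight that can be pruned directly shrinks the representation, and meta-learning the initialization amortizes the per-signal fitting cost.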