While neural fields have emerged as powerful representations of continuous data, there is a need for neural networks that can perform inference on such data without being sensitive to how the field is sampled, a property called (approximate) discretization invariance. We develop DI-Net, a framework for learning discretization invariant operators on neural fields of any type. Whereas current theoretical analyses of discretization invariant networks are restricted to the limit of infinite samples, our analysis does not require infinite samples and establishes upper bounds on the variation in DI-Net outputs given different finite discretizations. Our framework leads to a family of neural networks driven by numerical integration via quasi-Monte Carlo sampling with discretizations of low discrepancy. DI-Nets manifest desirable theoretical properties such as universal approximation of a large class of maps between $L^2$ functions, and gradients that are also discretization invariant. DI-Nets can also be seen as generalizations of many existing network families as they bridge discrete and continuous network classes, such as convolutional neural networks (CNNs) and neural operators respectively. Experimentally, DI-Nets derived from CNNs are able to classify and segment visual data represented by neural fields under various discretizations, and sometimes even generalize to new types of discretizations at test time.
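The abstract's central mechanism is replacing exact integrals with quadrature over low-discrepancy point sets. As a hedged illustration (not the authors' code), the sketch below contrasts plain Monte Carlo with scrambled Sobol quasi-Monte Carlo sampling when estimating an integral of a toy continuous field: the QMC estimates vary far less across different finite discretizations, which is the property DI-Net layers rely on. The `field` function is a placeholder stand-in for a neural field, and the equal-weight quadrature is an assumption for illustration.

```python
# Hedged sketch: quadrature of a continuous field under different discretizations.
# Assumes a toy field and equal-weight quadrature; not the DI-Net implementation.
import numpy as np
from scipy.stats import qmc


def field(x):
    # Toy continuous "neural field" f: [0,1]^2 -> R (placeholder, not the paper's model).
    return np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1]) + x.sum(axis=1)


def integrate(points):
    # Equal-weight quadrature: approximate the integral of `field` over [0,1]^2
    # by the mean of its values at the sampled coordinates.
    return field(points).mean()


n = 1024
rng = np.random.default_rng(0)

# Two independent i.i.d. uniform discretizations (plain Monte Carlo).
mc_a = integrate(rng.random((n, 2)))
mc_b = integrate(rng.random((n, 2)))

# Two independent scrambled Sobol discretizations (quasi-Monte Carlo, low discrepancy).
qmc_a = integrate(qmc.Sobol(d=2, scramble=True, seed=1).random(n))
qmc_b = integrate(qmc.Sobol(d=2, scramble=True, seed=2).random(n))

print(f"MC estimates:  {mc_a:.5f} vs {mc_b:.5f}  (spread {abs(mc_a - mc_b):.2e})")
print(f"QMC estimates: {qmc_a:.5f} vs {qmc_b:.5f}  (spread {abs(qmc_a - qmc_b):.2e})")
```

Because the low-discrepancy discretizations cover the domain more evenly, the two QMC estimates agree much more closely than the two Monte Carlo estimates, mirroring the finite-sample upper bounds on output variation described in the abstract.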