We introduce an efficient optimization-based meta-learning technique for learning large-scale implicit neural representations (INRs). Our main idea is to design an online selection of context points, which can significantly reduce the memory requirements of meta-learning in any established setting. At a given memory budget, these savings allow longer per-signal adaptation horizons, leading to better meta-initializations by reducing myopia and, more crucially, enabling learning on high-dimensional signals. To implement such context pruning, our technical novelty is three-fold. First, we propose a selection scheme that adaptively chooses a subset of context points at each adaptation step based on the predictive error, so that early steps model the global structure of the signal while later steps capture its high-frequency details. Second, we counteract any possible information loss from context pruning by minimizing the parameter distance to a bootstrapped target model trained on the full context set. Finally, we suggest using the full context set with a gradient scaling scheme at test time. Our technique is model-agnostic, intuitive, and straightforward to implement, showing significant reconstruction improvements for a wide range of signals. Code is available at https://github.com/jihoontack/ECoP
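To make the error-based selection step concrete, the following is a minimal PyTorch sketch of one way such context pruning could look inside a per-signal adaptation loop. The function names (`select_context_subset`, `inner_adapt`), the `keep_ratio` parameter, and the plain SGD inner loop are illustrative assumptions, not the paper's exact procedure; see the repository above for the actual implementation.

```python
import torch
import torch.nn as nn

def select_context_subset(model, coords, targets, keep_ratio=0.25):
    """Keep the fraction of context points the current model reconstructs worst.

    Hypothetical sketch: rank points by per-point squared error and retain
    the top `keep_ratio` fraction for the next adaptation step.
    """
    with torch.no_grad():
        per_point_error = (model(coords) - targets).pow(2).mean(dim=-1)
    k = max(1, int(keep_ratio * coords.shape[0]))
    idx = per_point_error.topk(k).indices  # highest-error points first
    return coords[idx], targets[idx]

def inner_adapt(model, coords, targets, steps=4, lr=1e-2, keep_ratio=0.25):
    """Per-signal adaptation on a freshly pruned context subset each step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        c, t = select_context_subset(model, coords, targets, keep_ratio)
        loss = (model(c) - t).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Usage on a toy 1D signal: fit a small MLP INR to (coordinate -> value) pairs.
inr = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
coords = torch.linspace(-1, 1, 512).unsqueeze(-1)  # (512, 1) coordinates
targets = torch.sin(8 * coords)                    # (512, 1) signal values
inner_adapt(inr, coords, targets)
```

Because only the pruned subset enters the differentiated forward pass, the memory cost of each unrolled adaptation step shrinks roughly in proportion to `keep_ratio`, which is what permits the longer adaptation horizons described above.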