Neural implicit representations have shown substantial improvements in efficiently storing 3D data compared to conventional formats. However, existing work has focused mainly on storage and subsequent reconstruction. In this work, we show that training neural representations for reconstruction alongside conventional tasks produces more general encodings that admit reconstructions of equal quality to single-task training, whilst improving results on conventional tasks compared to single-task encodings. We reformulate the semantic segmentation task to make it more representative of implicit representation contexts, and through multi-task experiments on reconstruction, classification, and segmentation, we show that our approach learns feature-rich encodings that admit equal performance on each task.