Training a deep neural network for semantic segmentation requires a large amount of pixel-level labeled data. To alleviate the data scarcity present in the real world, one can exploit synthetic data, whose labels are easy to obtain. Previous work has shown that the performance of a semantic segmentation model can be improved by training jointly on real and synthetic examples, provided the synthetic data are properly weighted. That weighting was learned by a heuristic that maximizes the similarity between synthetic and real examples. In our work, we instead learn a pixel-level weighting of the synthetic data by meta-learning, i.e., the weighting is learned solely to minimize the loss on the target task. We achieve this with a gradient-on-gradient technique that propagates the target loss back into the parameters of the weighting model. Experiments show that our method, with a single meta module, outperforms a complicated combination of an adversarial feature alignment, a reconstruction loss, and a hierarchical heuristic weighting at the pixel, region, and image levels.
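The gradient-on-gradient idea above can be illustrated with a minimal, hypothetical sketch (not the paper's actual model): a scalar linear model is trained on per-example-weighted synthetic data, and the weights `alpha` are updated by differentiating the loss on real (target) data through the inner gradient step. All names and hyperparameters here are illustrative assumptions; for a toy scalar model the second-order gradient can be written out in closed form.

```python
import numpy as np

def inner_step(w, alpha, xs, ys, lr):
    """One gradient step on the alpha-weighted synthetic squared loss."""
    grad_w = np.sum(alpha * 2.0 * xs * (w * xs - ys))
    return w - lr * grad_w

def real_loss(w, xr, yr):
    """Target-task loss, evaluated on real examples only."""
    return np.sum((w * xr - yr) ** 2)

def meta_grad(w, alpha, xs, ys, xr, yr, lr):
    """d real_loss(w') / d alpha with w' = inner_step(w, alpha, ...):
    the target loss is backpropagated through the inner update
    (a gradient of a gradient, written in closed form here)."""
    w_new = inner_step(w, alpha, xs, ys, lr)
    dL_dw = np.sum(2.0 * xr * (w_new * xr - yr))      # outer gradient at w'
    dw_dalpha = -lr * 2.0 * xs * (w * xs - ys)        # gradient of the inner step
    return dL_dw * dw_dalpha

# Toy data (assumed for illustration): real data follows y = 2x;
# the third synthetic example has a corrupted label.
xs = np.array([1.0, 2.0, 3.0])
ys = np.array([2.0, 4.0, -1.0])
xr = np.array([1.5, 2.5])
yr = np.array([3.0, 5.0])

w, lr, meta_lr = 0.0, 0.01, 0.05
alpha = np.ones_like(xs)
for _ in range(200):
    g = meta_grad(w, alpha, xs, ys, xr, yr, lr)
    alpha = np.clip(alpha - meta_lr * g, 0.0, None)   # meta update on the weights
    w = inner_step(w, alpha, xs, ys, lr)              # ordinary update on the model

# The corrupted synthetic example ends up down-weighted, and the model
# fits the real data despite training only on weighted synthetic data.
```

In a deep segmentation model, `alpha` would instead be produced by a pixel-level weighting network, and the closed-form derivative would be replaced by automatic differentiation through the inner update; the toy above only shows why the target loss alone suffices to drive the weights.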