Even though deep neural networks have shown tremendous success in countless applications, explaining their behaviour or predictions remains an open research problem. In this paper, we address this issue with a simple yet effective method that analyses the learning dynamics of deep neural networks in semantic segmentation tasks. Specifically, we visualize the learning behaviour during training by tracking how often samples are learned and forgotten in subsequent training epochs. This further allows us to derive important information about the proximity to the class decision boundary and to identify regions that pose a particular challenge to the model. Inspired by this phenomenon, we present a novel segmentation method that actively uses this information to alter the data representation within the model by increasing the variety of difficult regions. Finally, we show that our method consistently reduces the number of regions that are forgotten frequently. We further evaluate our method with respect to segmentation performance.
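The tracking described above can be illustrated with a minimal sketch. This is an assumption about the bookkeeping, not the authors' implementation: a "forgetting event" is counted whenever a sample (or, for segmentation, a pixel) goes from correctly classified in one epoch to misclassified in the next, and frequently forgotten samples are taken as lying close to the class decision boundary.

```python
import numpy as np

def count_forgetting_events(correct_per_epoch):
    """Count per-sample forgetting events.

    correct_per_epoch: array of shape (epochs, samples) with entry 1
    if the sample (or pixel, in a segmentation setting) was predicted
    correctly in that epoch, 0 otherwise.

    A forgetting event is a transition from correct (1) in epoch t
    to incorrect (0) in epoch t+1.
    """
    correct = np.asarray(correct_per_epoch)
    # Compare each epoch with the next; count where correctness drops.
    drops = (correct[:-1] == 1) & (correct[1:] == 0)
    return drops.sum(axis=0)

# Toy example: 4 epochs, 3 samples.
history = [
    [1, 1, 0],
    [0, 1, 0],  # sample 0 is forgotten once
    [1, 1, 1],
    [0, 1, 1],  # sample 0 is forgotten again
]
print(count_forgetting_events(history))  # [2 0 0]
```

Samples with high counts would mark the difficult regions the method then targets.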