In medical image analysis, semi-supervised learning is an effective method to extract knowledge from a small amount of labeled data and a large amount of unlabeled data. This paper focuses on a popular pipeline known as self-learning, and points out a weakness named lazy learning, which refers to the difficulty a model has in learning from the pseudo labels it generates itself. To alleviate this issue, we propose ATSO, an asynchronous version of teacher-student optimization. ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and the fine-tuned model to update the pseudo labels on the other subset. We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings. With slight modification, ATSO transfers well to natural image segmentation on autonomous driving data.
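The alternating schedule described above can be summarized in a minimal sketch. The names below (`atso_loop`, `fine_tune`, `pseudo_label`) are placeholders chosen for illustration, not the authors' implementation; the real pipeline would plug in a segmentation network and its training routine.

```python
from typing import Callable, List, Sequence, Tuple

def atso_loop(
    model,
    labeled_data: Sequence,
    unlabeled_data: Sequence,
    fine_tune: Callable,     # (model, labeled, pseudo_labeled) -> model (assumed signature)
    pseudo_label: Callable,  # (model, images) -> labels (assumed signature)
    num_rounds: int = 4,
):
    """Sketch of asynchronous teacher-student optimization (ATSO) as described in the abstract."""
    # Train an initial teacher on the small labeled set only.
    model = fine_tune(model, labeled_data, [])

    # Partition the unlabeled pool into two fixed subsets.
    half = len(unlabeled_data) // 2
    subset_a, subset_b = list(unlabeled_data[:half]), list(unlabeled_data[half:])

    # Initial pseudo labels for both subsets, produced by the teacher.
    labels_a = pseudo_label(model, subset_a)
    labels_b = pseudo_label(model, subset_b)

    for round_idx in range(num_rounds):
        if round_idx % 2 == 0:
            # Fine-tune on subset A with its current pseudo labels,
            # then use the updated model to refresh labels on subset B only.
            model = fine_tune(model, labeled_data, list(zip(subset_a, labels_a)))
            labels_b = pseudo_label(model, subset_b)
        else:
            # Swap roles: train on subset B, relabel subset A.
            model = fine_tune(model, labeled_data, list(zip(subset_b, labels_b)))
            labels_a = pseudo_label(model, subset_a)

    return model
```

The point of the asynchronous split is that a subset is never trained on pseudo labels that the current model just produced for it, which is one way to avoid the lazy-learning behavior the paper identifies.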