Self-supervision has proven to be an effective learning strategy when training target tasks on small annotated datasets. While current research focuses on creating novel pretext tasks to learn meaningful and reusable representations for the target task, these efforts yield only marginal performance gains over fully-supervised learning. Meanwhile, little attention has been paid to studying the robustness of networks trained in a self-supervised manner. In this work, we demonstrate that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield consistent results, exposing the hidden benefits of self-supervision for learning robust feature representations.