Collecting large-scale medical datasets with fine-grained annotations is time-consuming and requires expert annotators. For this reason, weakly supervised learning aims to optimise machine learning models using weaker forms of annotation, such as scribbles, which are easier and faster to collect. Unfortunately, training with weak labels is challenging and requires regularisation. Herein, we introduce a novel self-supervised multi-scale consistency loss which, coupled with an attention mechanism, encourages the segmentor to learn multi-scale relationships between objects and improves segmentation performance. We demonstrate state-of-the-art performance on several medical and non-medical datasets. The code used for the experiments is available at https://vios-s.github.io/multiscale-pyag.
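To make the idea of a multi-scale consistency loss concrete, the sketch below shows one plausible formulation: penalising disagreement between the segmentor's full-resolution prediction and predictions produced at coarser scales. This is only an illustrative assumption, not the authors' exact loss; the function name, the use of MSE, and the averaging scheme are hypothetical, and the repository linked above contains the actual implementation.

```python
# Minimal sketch of a multi-scale consistency term (PyTorch), assuming the
# segmentor exposes softmax predictions at several decoder scales.
# All names here are hypothetical illustrations, not the authors' API.
import torch
import torch.nn.functional as F


def multiscale_consistency_loss(full_res_pred, coarse_preds):
    """Penalise disagreement between the full-resolution prediction and
    predictions made at coarser scales.

    full_res_pred: (B, C, H, W) softmax probabilities at full resolution.
    coarse_preds:  list of (B, C, h_i, w_i) softmax probabilities.
    """
    loss = 0.0
    for coarse in coarse_preds:
        # Downsample the full-resolution prediction to the coarse scale and
        # compare the two probability maps (MSE chosen here for simplicity).
        target = F.interpolate(
            full_res_pred, size=coarse.shape[2:],
            mode="bilinear", align_corners=False,
        )
        loss = loss + F.mse_loss(coarse, target)
    return loss / max(len(coarse_preds), 1)
```

In such a scheme, the consistency term would be added to the weak (scribble-based) supervision loss with a weighting coefficient chosen on a validation set.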