Semi-supervised learning (SSL) promises gains in accuracy compared to training classifiers on small labeled datasets by also training on many unlabeled images. In realistic applications like medical imaging, unlabeled sets will be collected for expediency and thus uncurated: possibly different from the labeled set in represented classes or class frequencies. Unfortunately, modern deep SSL often makes accuracy worse when given uncurated unlabeled sets. Recent remedies suggest filtering approaches that detect out-of-distribution unlabeled examples and then discard or downweight them. Instead, we view all unlabeled examples as potentially helpful. We introduce a procedure called Fix-A-Step that can improve heldout accuracy of common deep SSL methods despite lack of curation. The key innovations are augmentations of the labeled set inspired by all unlabeled data and a modification of gradient descent updates to prevent following the multi-task SSL loss from hurting labeled-set accuracy. Though our method is simpler than alternatives, we show consistent accuracy gains on CIFAR-10 and CIFAR-100 benchmarks across all tested levels of artificial contamination for the unlabeled sets. We further suggest a real medical benchmark for SSL: recognizing the view type of ultrasound images of the heart. Our method can learn from 353,500 truly uncurated unlabeled images to deliver gains that generalize across hospitals.
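The "modification of gradient descent updates" mentioned above can be illustrated with a minimal sketch. The rule assumed here (an illustration, not the paper's exact algorithm) is gradient agreement: follow the combined labeled-plus-unlabeled SSL gradient only when it does not oppose the labeled-loss gradient, otherwise fall back to the labeled gradient alone so the unlabeled term cannot hurt labeled-set accuracy on that step. The function name `fix_a_step_update` is hypothetical.

```python
import numpy as np

def fix_a_step_update(params, grad_labeled, grad_combined, lr=0.1):
    """Hedged sketch of a gradient-agreement SSL update (illustrative only).

    params        : current parameter vector
    grad_labeled  : gradient of the labeled-set loss
    grad_combined : gradient of the multi-task (labeled + unlabeled) SSL loss
    """
    # If the combined gradient agrees with the labeled gradient
    # (positive dot product), take the full SSL step; otherwise the
    # unlabeled term would increase labeled loss to first order, so
    # fall back to the labeled gradient alone.
    if np.dot(grad_labeled, grad_combined) > 0:
        step = grad_combined
    else:
        step = grad_labeled
    return params - lr * step
```

In a real training loop, `grad_labeled` and `grad_combined` would come from two backward passes per batch; the per-step check is what prevents an uncurated unlabeled set from degrading labeled-set accuracy.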