Current contrastive learning methods learn invariance from an unannotated database by applying random transformations sampled, with fixed hyperparameters, from a large predefined list. Following previous works that introduce a small amount of supervision, we propose a framework that finds optimal transformations for contrastive learning using a differentiable transformation network. Our method improves performance in the low-annotation regime, both in supervised accuracy and in convergence speed. In contrast to previous work, no generative model is needed for transformation optimization. Transformed images retain the information relevant to solving the supervised task, here classification. Experiments were performed on 34,000 2D slices of brain Magnetic Resonance Images and 11,200 chest X-ray images. On both datasets, with only 10% of the labels, our model outperforms a fully supervised model trained with 100% of the labels.
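To make the objective concrete, here is a minimal NumPy sketch of the setup the abstract describes: two views of each image are produced by a transformation whose strength is a continuous (hence differentiable, and thus optimizable) hyperparameter, and a standard InfoNCE/NT-Xent contrastive loss pulls the paired views together. The additive-noise `transform` and its `strength` parameter are hypothetical stand-ins for the paper's transformation network, not its actual architecture.

```python
import numpy as np

def transform(x, strength, seed):
    # Hypothetical differentiable augmentation: additive noise whose
    # magnitude `strength` stands in for a learnable transformation
    # hyperparameter (the paper learns these via a transformation network).
    rng = np.random.default_rng(seed)
    return x + strength * rng.standard_normal(x.shape)

def info_nce(z1, z2, tau=0.5):
    # NT-Xent contrastive loss over a batch: (z1[i], z2[i]) are the
    # positive pairs; every other cross-view pair acts as a negative.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                              # (B, B) similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # mean over positives

# Toy batch of 4 "embeddings"; two views from the parameterized transform.
x = np.arange(12, dtype=float).reshape(4, 3) + 1.0
loss = info_nce(transform(x, 0.1, seed=0), transform(x, 0.1, seed=1))
```

In the full framework, gradients of a loss like this (combined with the small supervised signal) would flow back into the transformation parameters themselves, which is what removes the need for a fixed hand-tuned augmentation list.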