Semi-supervised learning has received significant recent attention, as it alleviates the need for large amounts of labelled data, which can be expensive, require expert knowledge and be time-consuming to collect. Recent developments in deep semi-supervised classification have achieved unprecedented performance, and the gap between supervised and semi-supervised learning is ever-decreasing. These performance gains, however, have relied on numerous technical tricks, strong augmentation techniques and costly optimisation schemes with multi-term loss functions. We propose a new framework, LaplaceNet, for deep semi-supervised classification with greatly reduced model complexity. We utilise a hybrid energy-neural-network model in which graph-based pseudo-labels, generated by minimising the graph Laplacian, are used to iteratively improve a neural-network backbone. Our model outperforms state-of-the-art methods for deep semi-supervised classification on several benchmark datasets. Furthermore, we analyse the application of strong augmentations to neural networks theoretically and justify the use of a multi-sampling approach for semi-supervised learning. We demonstrate, through rigorous experimentation, that this multi-sampling augmentation approach improves generalisation and reduces the network's sensitivity to augmentation.
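To make the graph-based pseudo-labelling step concrete, the following is a minimal sketch of one standard way to generate labels by minimising the graph Laplacian: solving for the harmonic label matrix that agrees with the known labels on the labelled set. The function name `laplacian_pseudo_labels` and the arguments `W`, `Y_l` and `labeled_idx` are illustrative assumptions, not identifiers from the paper, and the affinity matrix `W` is assumed to be precomputed (e.g. from backbone feature similarities).

```python
import numpy as np

def laplacian_pseudo_labels(W, Y_l, labeled_idx):
    """Sketch: pseudo-labels from graph-Laplacian minimisation.

    W           -- (n, n) symmetric non-negative affinity matrix
    Y_l         -- (n_l, c) one-hot labels for the labelled examples
    labeled_idx -- indices of the labelled examples within 0..n-1
    """
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W  # unnormalised graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    # Minimising tr(F^T L F) subject to F matching Y_l on the labelled
    # set gives the harmonic solution F_u = -L_uu^{-1} L_ul Y_l.
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    F_u = np.linalg.solve(L_uu, -L_ul @ Y_l)

    return unlabeled_idx, F_u.argmax(axis=1)  # hard pseudo-labels
```

In an iterative hybrid scheme of this kind, these pseudo-labels would then serve as training targets for the neural-network backbone, whose updated features in turn yield a refined affinity matrix for the next round.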
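The multi-sampling augmentation idea can likewise be illustrated with a short sketch: rather than drawing a single strong augmentation per unlabelled image, the loss is averaged over several independent draws, which reduces the variance contributed by any one augmentation. The helper `multi_sample_loss` and the stochastic transform `augment` are hypothetical names for illustration, assuming a PyTorch-style classifier and hard pseudo-label targets.

```python
import torch
import torch.nn.functional as F

def multi_sample_loss(model, augment, x, pseudo_y, k=4):
    """Average the pseudo-label loss over k independent strong
    augmentations of the same batch (a multi-sampling sketch)."""
    losses = []
    for _ in range(k):
        logits = model(augment(x))  # fresh stochastic augmentation each pass
        losses.append(F.cross_entropy(logits, pseudo_y))
    return torch.stack(losses).mean()
```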