Supervised training of optical flow predictors generally yields better accuracy than unsupervised training. However, this improved performance often comes at a high annotation cost. Semi-supervised training trades off accuracy against annotation cost. We use a simple yet effective semi-supervised training method to show that even a small fraction of labels can improve flow accuracy by a significant margin over unsupervised training. In addition, we propose active learning methods based on simple heuristics to further reduce the number of labels required to achieve the same target accuracy. Our experiments on both synthetic and real optical flow datasets show that our semi-supervised networks generally need around 50% of the labels to achieve close to full-label accuracy, and only around 20% with active learning on Sintel. We also analyze and provide insights into the factors that may influence active learning performance. Code is available at https://github.com/duke-vision/optical-flow-active-learning-release.
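To make the semi-supervised setup concrete, below is a minimal sketch of how a supervised end-point-error term and an unsupervised photometric term can be combined depending on whether a ground-truth label is available. This is an illustrative PyTorch example under assumed conventions (flow tensors of shape `(B, 2, H, W)`, a simple L1 photometric penalty, weights `w_sup` and `w_unsup`), not the authors' actual implementation.

```python
# Minimal sketch of a semi-supervised flow loss (illustrative, not the paper's code).
import torch
import torch.nn.functional as F


def photometric_loss(img1, img2, flow):
    """Unsupervised term: warp img2 toward img1 with the predicted flow
    and penalize the photometric difference (simple L1 here)."""
    b, _, h, w = img1.shape
    # Build a sampling grid shifted by the predicted flow.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1  # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (B, H, W, 2)
    warped = F.grid_sample(img2, grid, align_corners=True)
    return (img1 - warped).abs().mean()


def epe_loss(flow_pred, flow_gt):
    """Supervised term: average end-point error against ground truth."""
    return torch.norm(flow_pred - flow_gt, dim=1).mean()


def semi_supervised_loss(img1, img2, flow_pred, flow_gt=None,
                         w_sup=1.0, w_unsup=1.0):
    """Always apply the unsupervised loss; add the supervised loss
    only for samples that have a ground-truth label."""
    loss = w_unsup * photometric_loss(img1, img2, flow_pred)
    if flow_gt is not None:
        loss = loss + w_sup * epe_loss(flow_pred, flow_gt)
    return loss
```

In this sketch, an active learning strategy would simply decide which frame pairs get a `flow_gt` label (e.g., by ranking frames with a heuristic score) so that the supervised term is spent where it helps most.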