We study the effect of adversarial perturbations of images on disparity estimates produced by deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but also transfer to models with different architectures, trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations yield trained models that are more robust, without sacrificing overall accuracy. This is unlike what has been observed in image classification, where adding perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but at the cost of overall accuracy. We test our method on the most recent stereo networks and evaluate their performance on public benchmark datasets.
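To make the setting concrete, the kind of attack described above can be sketched as a one-step, gradient-sign (FGSM-style) perturbation of the left and right images of a stereo pair. This is a minimal illustration, not the paper's actual method: the tiny convolutional "stereo network", the L1 disparity loss, and the epsilon bound are all assumptions chosen for brevity; real stereo models (e.g. cost-volume architectures) are far larger.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in "stereo network": maps the concatenated
# left/right RGB pair (6 channels) to a 1-channel disparity map.
# Real stereo networks build cost volumes; this is only a sketch.
model = nn.Sequential(
    nn.Conv2d(6, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

def fgsm_stereo(model, left, right, disp_gt, eps=2 / 255):
    """One-step FGSM-style additive perturbation of a stereo pair.

    eps bounds the max-norm of the perturbation, keeping it
    visually imperceptible (assumed value, not from the paper).
    """
    left = left.clone().requires_grad_(True)
    right = right.clone().requires_grad_(True)
    pred = model(torch.cat([left, right], dim=1))
    # L1 loss against ground-truth disparity (a common stereo loss).
    loss = nn.functional.l1_loss(pred, disp_gt)
    loss.backward()
    # Ascend the loss: add the eps-scaled sign of the input gradient,
    # then clamp back to the valid image range.
    left_adv = (left + eps * left.grad.sign()).clamp(0, 1).detach()
    right_adv = (right + eps * right.grad.sign()).clamp(0, 1).detach()
    return left_adv, right_adv

left = torch.rand(1, 3, 32, 32)
right = torch.rand(1, 3, 32, 32)
disp_gt = torch.rand(1, 1, 32, 32)
l_adv, r_adv = fgsm_stereo(model, left, right, disp_gt)
```

Adversarial data augmentation, as evaluated in the abstract, would then mix such `(l_adv, r_adv)` pairs into the training set alongside the clean images.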