Convolutional neural networks are commonly considered shift invariant, but it has been demonstrated that their responses can vary with the exact location of an object in the image. In this paper we show that most commonly investigated datasets exhibit a bias: during training, objects are over-represented at the center of the image. This bias, together with the boundary conditions of these networks, can have a significant effect on their performance, and their accuracy drops markedly as an object approaches the image boundary. We also demonstrate how this effect can be mitigated with data augmentation techniques.
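One simple augmentation of the kind the abstract alludes to is random translation: shifting training images by random offsets so that objects also appear near the borders. The sketch below is a minimal, hypothetical illustration using NumPy (the function name `random_shift` and the zero-padding choice are assumptions, not the paper's exact method).

```python
import numpy as np

def random_shift(image, max_shift, rng=None):
    """Randomly translate a 2-D image, filling vacated pixels with zeros.

    Exposing the model to objects at off-center positions counteracts
    the center bias described above. `max_shift` bounds the offset in
    pixels along each axis.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Draw vertical and horizontal offsets in [-max_shift, max_shift].
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(image)
    # Overlapping source/destination windows for the shift (dy, dx).
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    return shifted
```

Applied on the fly during training, this spreads object positions more uniformly over the image, at the cost of occasionally cropping away part of the object near the border.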