In this paper, we study the performance invariance of convolutional neural networks when confronted with variable image sizes, in the context of a more "wild" steganalysis. First, we propose two algorithms, together with the accompanying definitions, that enable a rigorous experimental protocol built on datasets of "similar difficulty" and "similar security". The "smart crop 2" algorithm introduces the Nearly Nested Image Datasets (NNID), which ensure a "similar difficulty" across the various datasets, and a dichotomous search algorithm ensures a "similar security". Second, we show that this invariance does not hold in state-of-the-art architectures. We also exhibit a difference in behavior depending on whether the test images are larger or smaller than the training images. Finally, based on these experiments, we propose the use of dilated convolutions, which leads to an improvement of a state-of-the-art architecture.
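To make the final point concrete: a dilated convolution with rate d places d-1 gaps between kernel taps, enlarging the receptive field without adding parameters. The following minimal 1-D sketch is purely illustrative (it is not the paper's implementation, and the function name and example values are hypothetical):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with the given dilation rate.

    A 3-tap kernel with dilation=2 covers 5 input samples, i.e. the
    effective footprint is (len(kernel) - 1) * dilation + 1.
    """
    span = (len(kernel) - 1) * dilation + 1  # effective kernel footprint
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * signal[start + k * dilation]
        out.append(acc)
    return out

# With dilation=2, the difference kernel [1, 0, -1] compares samples
# four positions apart: each output is x[i] - x[i + 4].
x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 0, -1], dilation=2))  # → [-4.0, -4.0]
```

In a 2-D CNN the same idea applies per axis; this is why dilation can widen the spatial context of a steganalysis network at no extra parameter cost.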