With the proliferation of deep learning methods, many computer vision problems once considered purely academic are now viable in consumer settings. One drawback of consumer applications is lossy compression, which is an engineering necessity for storing and transmitting user images efficiently and cheaply. Despite this, the effect of compression on deep neural networks has received little study, and benchmark datasets are often losslessly compressed or compressed at high quality. Here we present a unified study of the effects of JPEG compression on a range of common tasks and datasets. We show that high compression incurs a significant penalty on common performance metrics. We test several methods for mitigating this penalty, including a novel method based on artifact correction that requires no labels to train.