Much recent research has been dedicated to improving the efficiency of training and inference for image classification. This effort has commonly focused on explicitly improving theoretical efficiency, often measured as ImageNet validation accuracy per FLOP. These theoretical savings have, however, proven challenging to achieve in practice, particularly on high-performance training accelerators. In this work, we focus on improving the practical efficiency of the state-of-the-art EfficientNet models on a new class of accelerator, the Graphcore IPU. We do this by extending this family of models in the following ways: (i) generalising depthwise convolutions to group convolutions; (ii) adding proxy-normalized activations to match batch normalization performance with batch-independent statistics; (iii) reducing compute by lowering the training resolution and inexpensively fine-tuning at higher resolution. We find that these three methods improve the practical efficiency for both training and inference. Our code will be made available online.
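To illustrate point (i), below is a minimal PyTorch sketch (not the paper's implementation) of an inverted-bottleneck block in which the depthwise convolution is generalised to a group convolution. The function name `grouped_mbconv` and the values of `expand` and `group_size` are illustrative assumptions; the normalization is kept as plain batch normalization for brevity rather than the batch-independent, proxy-normalized scheme the paper proposes.

```python
import torch
import torch.nn as nn

def grouped_mbconv(in_ch: int, expand: int = 4, kernel: int = 3,
                   group_size: int = 16) -> nn.Sequential:
    """Inverted-bottleneck block with a group convolution in place of the
    depthwise convolution. A group_size of 1 recovers the standard
    depthwise case used in the original EfficientNet."""
    mid_ch = in_ch * expand
    groups = max(mid_ch // group_size, 1)  # depthwise would use groups=mid_ch
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, 1, bias=False),            # pointwise expansion
        nn.BatchNorm2d(mid_ch),
        nn.SiLU(),
        nn.Conv2d(mid_ch, mid_ch, kernel, padding=kernel // 2,
                  groups=groups, bias=False),                # group conv instead of depthwise
        nn.BatchNorm2d(mid_ch),
        nn.SiLU(),
        nn.Conv2d(mid_ch, in_ch, 1, bias=False),             # pointwise projection
        nn.BatchNorm2d(in_ch),
    )

# Usage example: the block preserves spatial size and channel count.
x = torch.randn(1, 32, 56, 56)
y = grouped_mbconv(32)(x)
print(y.shape)  # torch.Size([1, 32, 56, 56])
```

Larger group sizes increase arithmetic intensity relative to depthwise convolutions, which is one way such a change can map more efficiently onto accelerators; the specific group size used in the paper may differ from the illustrative value above.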