Much recent research has been dedicated to improving the efficiency of training and inference for image classification. This effort has commonly focused on explicitly improving theoretical efficiency, often measured as ImageNet validation accuracy per FLOP. These theoretical savings have, however, proven challenging to achieve in practice, particularly on high-performance training accelerators. In this work, we focus on improving the practical efficiency of the state-of-the-art EfficientNet models on a new class of accelerator, the Graphcore IPU. We do this by extending this family of models in the following ways: (i) generalising depthwise convolutions to group convolutions; (ii) adding proxy-normalized activations to match batch normalization performance with batch-independent statistics; (iii) reducing compute by lowering the training resolution and inexpensively fine-tuning at higher resolution. We find that these three methods improve the practical efficiency for both training and inference. Code is available at https://github.com/graphcore/graphcore-research/tree/main/Making_EfficientNet_More_Efficient.
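The first extension, generalising depthwise convolutions to group convolutions, trades a modest increase in compute for better accelerator utilisation. As a rough illustration of that trade-off (the helper `conv2d_macs`, the group size of 16, and the concrete layer shapes below are our own assumptions, not taken from the paper), the multiply-accumulate count of a grouped convolution scales with the number of input channels each group sees:

```python
def conv2d_macs(h, w, c_in, c_out, k, groups=1):
    """Multiply-accumulate count for a 2D convolution with a square k x k
    kernel and h x w output spatial size. Each output channel only sees
    c_in / groups of the input channels (illustrative helper, not from the paper)."""
    assert c_in % groups == 0 and c_out % groups == 0
    return h * w * (c_in // groups) * c_out * k * k

# Depthwise convolution: one group per channel (groups == c_in == c_out).
depthwise = conv2d_macs(56, 56, 96, 96, 3, groups=96)

# Group convolution with an assumed group size of 16 (groups = c_in / 16):
# more compute than depthwise, but far less than a dense convolution.
grouped = conv2d_macs(56, 56, 96, 96, 3, groups=96 // 16)

# Dense (standard) convolution: a single group spanning all channels.
dense = conv2d_macs(56, 56, 96, 96, 3, groups=1)

assert depthwise < grouped < dense
```

Under these assumed shapes, the grouped variant costs 16x the MACs of the depthwise one while the dense convolution costs 96x, which sketches why group convolutions can recover hardware efficiency without approaching dense-convolution cost.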