Committee-based models (ensembles or cascades) combine existing pre-trained models into a single composite model. While ensembles and cascades are well-known techniques that predate deep learning, they are not considered a core building block of deep model architectures and are rarely compared against in the recent literature on efficient models. In this work, we go back to basics and conduct a comprehensive analysis of the efficiency of committee-based models. We find that even the simplest method for building committees from existing, independently pre-trained models can match or exceed the accuracy of state-of-the-art models while being drastically more efficient. These simple committee-based models also outperform sophisticated neural architecture search methods (e.g., BigNAS). The findings hold across several tasks, including image classification, video classification, and semantic segmentation, and across architecture families such as ViT, EfficientNet, ResNet, MobileNetV2, and X3D. Our results show that an EfficientNet cascade can achieve a 5.4x speedup over EfficientNet-B7, and a ViT cascade a 2.3x speedup over ViT-L-384, while being equally accurate.
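To make the cascade idea concrete, below is a minimal sketch of a confidence-thresholded cascade over independently pre-trained classifiers, in the spirit of the committee construction the abstract describes. The function name `cascade_predict`, the use of max softmax probability as the confidence signal, and the threshold value in the usage comment are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cascade_predict(models, thresholds, x):
    """Evaluate classifiers cheapest-first on a single input x (batch of 1);
    exit as soon as the max softmax probability clears that stage's threshold.
    The final (largest) model always answers, so `thresholds` has
    len(models) - 1 entries. All models share the same label set."""
    for model, tau in zip(models[:-1], thresholds):
        probs = F.softmax(model(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= tau:  # confident enough: early exit, skip larger models
            return pred.item()
    return models[-1](x).argmax(dim=-1).item()

# Hypothetical two-stage cascade (threshold value illustrative):
# small = torchvision.models.efficientnet_b0(weights="IMAGENET1K_V1").eval()
# large = torchvision.models.efficientnet_b7(weights="IMAGENET1K_V1").eval()
# label = cascade_predict([small, large], [0.85], image)
```

Because most inputs are "easy" and exit at the cheap first stage, the average cost per example stays close to that of the small model while hard examples still receive the large model's accuracy, which is the source of the speedups reported above.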