Committee-based models (ensembles or cascades) are built by combining existing pre-trained models. While ensembles and cascades are well-known techniques that predate deep learning, they are not considered a core building block of deep model architectures and are rarely used as baselines in recent literature on efficient models. In this work, we go back to basics and conduct a comprehensive analysis of the efficiency of committee-based models. We find that even the most simplistic method for building committees from existing, independently trained networks can match or exceed the accuracy of state-of-the-art models while being drastically more efficient. These simple committee-based models also outperform sophisticated neural architecture search methods (e.g., BigNAS). These findings hold across several tasks, including image classification, video classification, and semantic segmentation, and across various architecture families, such as ViT, EfficientNet, ResNet, MobileNetV2, and X3D. For example, an EfficientNet cascade can achieve a 5.4x speedup over B7, and a ViT-based cascade can achieve a 2.3x speedup over ViT-L-384, while being equally accurate.
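To make the cascade idea concrete, here is a minimal sketch of a confidence-based cascade over two pre-trained models. The model functions, the two-class setup, and the 0.8 threshold are illustrative assumptions, not the paper's exact configuration: the cheap model runs first, and the expensive model is invoked only when the cheap model's maximum softmax probability falls below a confidence threshold.

```python
import numpy as np

def cascade_predict(x, small_model, large_model, threshold=0.8):
    """Run the cheap model first; fall back to the expensive model
    only when the cheap model's max probability is below threshold."""
    probs = small_model(x)
    if probs.max() >= threshold:
        return int(probs.argmax()), "small"  # early exit: confident
    probs = large_model(x)                   # escalate to the big model
    return int(probs.argmax()), "large"

# Hypothetical stand-in "models" returning class probabilities,
# used here only to exercise the cascade logic.
def small_model(x):
    return np.array([0.9, 0.1]) if x > 0 else np.array([0.55, 0.45])

def large_model(x):
    return np.array([0.2, 0.8])

print(cascade_predict(1.0, small_model, large_model))   # (0, 'small')
print(cascade_predict(-1.0, small_model, large_model))  # (1, 'large')
```

The speedup comes from the early exit: most inputs are handled by the small model alone, so the average cost approaches that of the small model while accuracy on hard inputs is recovered by the large one. An ensemble, by contrast, would average the probabilities of both models on every input.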