Ensemble methods have been widely used to improve the generalization and uncertainty calibration of machine learning models, yet they are difficult to apply in deep learning systems: training an ensemble of deep neural networks (DNNs) and then deploying them for online prediction incurs an extremely high computational overhead at both training and test time. Recently, several advanced techniques, such as fast geometric ensembling (FGE) and snapshot ensembles, have been proposed. These methods can train an ensemble in the same amount of time as a single model, thereby getting around the hurdle of training time. However, their overhead for model recording and test-time computation remains much higher than that of their single-model counterparts. Here we propose parsimonious FGE (PFGE), which employs a lightweight ensemble of higher-performing DNNs generated by several successively performed procedures of stochastic weight averaging (SWA). Experimental results across advanced DNN architectures on different datasets, namely CIFAR-$\{$10,100$\}$ and ImageNet, demonstrate its performance. Compared with state-of-the-art methods, PFGE achieves better generalization and satisfactory calibration, while significantly reducing the overhead of model recording and test-time prediction. Our code is available at https://github.com/ZJLAB-AMMI/PFGE.
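To make the idea concrete, below is a minimal sketch of the procedure described above: several SWA runs are performed in succession, each warm-started from the previous averaged solution, and each averaged model is kept as one ensemble member. This is an illustration under assumptions, not the authors' exact implementation; it assumes PyTorch's `torch.optim.swa_utils`, the helper `train_one_epoch` is hypothetical, and all hyperparameters are placeholders.

```python
import copy
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def pfge_ensemble(model, loader, num_members=4, epochs_per_member=10,
                  base_lr=0.05, swa_lr=0.01, device="cuda"):
    """Sketch of PFGE: run successive SWA procedures, each warm-started
    from the previous averaged solution, and keep every averaged model
    as an ensemble member. `train_one_epoch` is a hypothetical helper
    that performs one epoch of SGD on `loader`."""
    members = []
    for _ in range(num_members):
        optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
        swa_model = AveragedModel(model)          # running weight average
        swa_scheduler = SWALR(optimizer, swa_lr=swa_lr)
        for _ in range(epochs_per_member):
            train_one_epoch(model, loader, optimizer, device)  # hypothetical
            swa_scheduler.step()
            swa_model.update_parameters(model)    # fold current weights in
        update_bn(loader, swa_model, device=device)  # recompute BN statistics
        members.append(copy.deepcopy(swa_model))
        # warm-start the next SWA procedure from the current average
        model.load_state_dict(swa_model.module.state_dict())
    return members

@torch.no_grad()
def ensemble_predict(members, x):
    """Test-time prediction: average the softmax outputs of the members."""
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in members])
    return probs.mean(dim=0)
```

Under this sketch, only `num_members` averaged checkpoints are stored and evaluated at test time, which is the source of the reduced recording and prediction overhead relative to ensembling many raw FGE snapshots.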