Ensemble methods have been widely used to improve the generalization and uncertainty calibration of machine learning models, yet they are difficult to apply in deep learning systems: training an ensemble of deep neural networks (DNNs) and then deploying them for online prediction incurs extremely high computational overhead at both training and test time. Recently, several advanced techniques, such as fast geometric ensembling (FGE) and snapshot ensembles, have been proposed. These methods can train a model ensemble in the same amount of time as a single model, thus getting around the hurdle of training time. However, their overhead for model recording and test-time computation remains much higher than that of their single-model counterparts. Here we propose parsimonious FGE (PFGE), which employs a lightweight ensemble of high-performing DNNs generated by several successively performed procedures of stochastic weight averaging (SWA). Experiments across advanced DNN architectures on different datasets, namely CIFAR-$\{$10,100$\}$ and ImageNet, demonstrate its performance. Results show that, compared with state-of-the-art methods, PFGE achieves better generalization and satisfactory calibration capability, while significantly reducing the overhead of model recording and test-time predictions.
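To make the successive-SWA idea concrete, below is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the paper's reference implementation: the function names (`pfge_ensemble`, `predict`), the hyperparameter values, and the choice of how training resumes between SWA procedures are all placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def pfge_ensemble(model, train_loader, optimizer, num_members=4, swa_epochs=5):
    """Sketch of the PFGE idea: run several short SWA procedures back to
    back; each procedure's weight average becomes one lightweight
    ensemble member. Hyperparameters here are illustrative only."""
    members = []
    for _ in range(num_members):
        # running weight average for this SWA procedure
        swa_model = torch.optim.swa_utils.AveragedModel(model)
        for _epoch in range(swa_epochs):
            for x, y in train_loader:
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                optimizer.step()
            swa_model.update_parameters(model)
        # store the SWA average as an ensemble member; SGD then continues
        # from the current iterate (an assumption of this sketch).
        # A real implementation would also recompute BatchNorm statistics,
        # e.g. via torch.optim.swa_utils.update_bn(train_loader, swa_model).
        members.append(copy.deepcopy(swa_model.module))
    return members

def predict(members, x):
    """Test-time prediction: average the members' softmax outputs."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)
```

Because each member is itself a weight average, the ensemble can stay small (few recorded models, few test-time forward passes) relative to FGE-style ensembles, which is the overhead reduction the abstract refers to.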