In this work, we leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models. We demonstrate that we can jointly optimize for accuracy, inference time, and number of parameters by combining DNN classifiers. To achieve this, we combine multiple ensemble strategies: bagging, boosting, and an ordered chain of classifiers. To reduce the number of DNN ensemble evaluations during the search, we propose EARN, an evolutionary approach that optimizes the ensemble according to these three objectives under the constraints specified by the user. We run EARN on 10 image classification datasets with an initial pool of 32 state-of-the-art DCNNs on both CPU and GPU platforms, and we generate models with speedups of up to $7.60\times$, parameter reductions of $10\times$, or accuracy gains of up to $6.01\%$ relative to the best DNN in the pool. In addition, our method generates models that are $5.6\times$ faster than those produced by state-of-the-art methods for automatic model generation.
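To make the search concrete, the sketch below shows a minimal multi-objective evolutionary loop over ensembles drawn from a fixed model pool. It is an illustrative assumption rather than the EARN implementation: candidates are bit-masks over the pool, the three objectives (accuracy, latency, parameter count) come from a toy surrogate instead of real DNN evaluations, and selection keeps only Pareto non-dominated ensembles each generation.

```python
# Minimal sketch of a multi-objective evolutionary search over DNN ensembles,
# in the spirit of EARN. The pool statistics, the surrogate fitness model, and
# all names here are hypothetical, not the authors' implementation.
import random

random.seed(0)

# Hypothetical pool of 32 DNNs: (top-1 accuracy, latency in ms, params in M).
POOL = [(random.uniform(0.70, 0.85),
         random.uniform(5.0, 50.0),
         random.uniform(2.0, 60.0))
        for _ in range(32)]

def evaluate(mask):
    """Toy surrogate for ensemble evaluation: each extra member trims part of
    the best member's error, while latency and parameters accumulate (as in a
    sequential chain of classifiers)."""
    members = [POOL[i] for i, bit in enumerate(mask) if bit]
    if not members:
        return (0.0, float("inf"), float("inf"))
    best_err = min(1.0 - a for a, _, _ in members)
    acc = 1.0 - best_err * (0.9 ** (len(members) - 1))
    latency = sum(t for _, t, _ in members)
    params = sum(p for _, _, p in members)
    return (acc, latency, params)

def dominates(a, b):
    """Pareto dominance: no worse on all three objectives, better on one."""
    no_worse = (a[0] >= b[0], a[1] <= b[1], a[2] <= b[2])
    return all(no_worse) and a != b

def mutate(mask):
    child = list(mask)
    child[random.randrange(len(child))] ^= 1  # add or drop one pool member
    return tuple(child)

# Evolutionary loop: mutate, then keep only non-dominated ensembles.
population = {tuple(random.randint(0, 1) for _ in POOL) for _ in range(20)}
for _ in range(50):
    population |= {mutate(m) for m in list(population)}
    scored = {m: evaluate(m) for m in population}
    population = {m for m in scored
                  if not any(dominates(scored[o], scored[m]) for o in scored)}

# Report the Pareto front: each surviving ensemble trades off the objectives.
for m in sorted(population, key=lambda m: -evaluate(m)[0])[:5]:
    acc, lat, par = evaluate(m)
    print(f"acc={acc:.3f}  latency={lat:6.1f} ms  params={par:6.1f} M")
```

In the actual method, user-specified constraints would prune candidates whose latency or size exceeds the stated budgets before selection; the surrogate here stands in for the costly ensemble evaluations the evolutionary search is designed to reduce.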