Performance optimization of deep learning models is conducted either manually, through automatic architecture search, or by a combination of both. However, model performance strongly depends on the target hardware and on how successfully the models were trained. We propose to use a multi-dimensional Pareto frontier to redefine the efficiency measure of candidate deep learning models, where several variables such as training cost, inference latency, and accuracy play a relative role in determining a dominant model. Furthermore, a random version of the multi-dimensional Pareto frontier is introduced to mitigate the uncertainty in the accuracy, latency, and throughput of deep learning models across different experimental setups. These two complementary methods can be combined to benchmark deep learning models objectively. We apply the proposed method to a wide range of deep image classification models trained on ImageNet. Our method combines competing, stochastic variables into a single relative efficiency measure, which allows ranking deep learning models that run efficiently on different hardware and objectively combining inference efficiency with training efficiency.
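To illustrate the multi-dimensional Pareto dominance underlying the proposed efficiency measure, the following is a minimal sketch. It assumes all objectives are expressed so that smaller is better (e.g., classification error instead of accuracy); the candidate tuples below are hypothetical, not results from the paper.

```python
def dominates(a, b):
    """True if model a Pareto-dominates model b.

    All objectives are to be minimized: a must be no worse than b in
    every objective and strictly better in at least one.
    """
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(models):
    """Return the non-dominated subset of candidate models."""
    return [m for m in models
            if not any(dominates(o, m) for o in models if o is not m)]

# Hypothetical candidates: (error = 1 - accuracy, latency in ms, training cost in GPU-hours)
candidates = [
    (0.20, 5.0, 100.0),
    (0.15, 9.0, 300.0),
    (0.22, 6.0, 120.0),  # dominated by the first candidate in all three objectives
    (0.25, 3.0, 80.0),
]
print(pareto_frontier(candidates))
# → [(0.2, 5.0, 100.0), (0.15, 9.0, 300.0), (0.25, 3.0, 80.0)]
```

The "random version" of the frontier described in the abstract would extend this deterministic check by treating each objective as a random variable measured across repeated runs, so that dominance is assessed under uncertainty rather than from single point estimates.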