The high efficiency of domain-specific hardware accelerators for machine learning (ML) comes from specialization, at the cost of reduced configurability/flexibility. There is growing interest in developing flexible ML accelerators to make them future-proof against the rapid evolution of Deep Neural Networks (DNNs). However, the notion of accelerator flexibility has only ever been used informally, preventing computer architects from conducting systematic apples-to-apples design-space exploration (DSE) across trillions of choices. In this work, we formally define accelerator flexibility and show how it can be integrated into DSE. Specifically, we capture DNN accelerator flexibility across four axes: tiling, ordering, parallelization, and array shape. We categorize existing accelerators into 16 classes based on the axes of flexibility they support, and define a precise quantification of an accelerator's degree of flexibility along each axis. We leverage these to develop a novel flexibility-aware DSE framework. We demonstrate how this framework can be used to perform first-of-their-kind evaluations, including an isolation study that identifies the individual impact of each flexibility axis. We show that adding flexibility features to a hypothetical DNN accelerator designed in 2014 improves runtime on future (i.e., present-day) DNNs by 11.8x geomean.