The rapidly changing ML model landscape presents a unique opportunity for building hardware accelerators optimized for specific datacenter-scale workloads. We propose the Full-stack Accelerator Search Technique (FAST), a hardware accelerator search framework that defines a broad optimization environment covering key design decisions within the hardware-software stack, including the hardware datapath, software scheduling, and compiler passes such as operation fusion and tensor padding. Although FAST can be applied to any number and type of deep learning workload, in this paper we focus on optimizing for a single vision model or a small set of vision models, resulting in significantly faster and more power-efficient designs relative to a general-purpose ML accelerator. When evaluated on EfficientNet, ResNet50v2, and OCR inference performance relative to TPU-v3, designs generated by FAST optimized for single workloads can improve Perf/TDP (performance per Thermal Design Power, i.e., peak power) by over 6x in the best case and 4x on average. On a limited workload subset, FAST improves Perf/TDP by 2.85x on average, with a reduction to 2.35x for a single design optimized over the entire set of workloads. In addition, we demonstrate a potential 1.8x speedup opportunity for TPU-v3 with improved scheduling.
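To make the scope of the joint search space concrete, the sketch below shows how hardware datapath parameters and compiler-pass choices might be encoded and searched together under a Perf/TDP objective. This is a minimal illustration only, not the authors' implementation: every parameter name, value range, and constant in the analytical cost model is a hypothetical placeholder, and the random-search optimizer stands in for whatever search strategy and evaluator FAST actually uses.

    from dataclasses import dataclass
    import random

    @dataclass(frozen=True)
    class DesignPoint:
        pe_rows: int    # systolic-array height (hardware datapath)
        pe_cols: int    # systolic-array width (hardware datapath)
        l2_kib: int     # on-chip buffer capacity in KiB
        fuse_ops: bool  # compiler pass: operation fusion on/off
        pad_to: int     # compiler pass: pad tensor dims to this multiple

    # Illustrative discrete search space; real spaces are far larger.
    SEARCH_SPACE = {
        "pe_rows": [64, 128, 256],
        "pe_cols": [64, 128, 256],
        "l2_kib": [4096, 8192, 16384],
        "fuse_ops": [False, True],
        "pad_to": [8, 16, 32],
    }

    def perf_per_tdp(p: DesignPoint) -> float:
        # Toy stand-in for the evaluator, which pairs a performance model
        # with a TDP estimate; all constants here are arbitrary.
        throughput = p.pe_rows * p.pe_cols * (1.10 if p.fuse_ops else 1.0)
        throughput *= 1.0 - 0.002 * p.pad_to  # padding wastes some MACs
        tdp_watts = 10.0 + 0.001 * p.pe_rows * p.pe_cols + 0.0005 * p.l2_kib
        return throughput / tdp_watts

    def random_search(n_samples: int = 1000) -> DesignPoint:
        # Sample joint hardware/software configurations and keep the one
        # with the best Perf/TDP for the target workload.
        best, best_score = None, float("-inf")
        for _ in range(n_samples):
            p = DesignPoint(**{k: random.choice(v)
                               for k, v in SEARCH_SPACE.items()})
            score = perf_per_tdp(p)
            if score > best_score:
                best, best_score = p, score
        return best

    if __name__ == "__main__":
        print(random_search())

The point of the sketch is that datapath sizing and compiler decisions such as fusion and padding are drawn from one shared search space and scored by a single objective, rather than being tuned in isolation.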