Deep learning, and hardware for it, have garnered immense academic and industry interest in the past five years -- including almost 100 startups and more than $5B of venture-capital investment -- and a renewed relevance for the role of architecture. However, the state of the art remains NVIDIA's TensorCore-based systems, which provide i) top-of-the-line performance, ii) a turnkey software stack, and iii) coverage across a wide spectrum of DL network styles (DL architectures in AI parlance). Other academic and industry efforts have included novel approaches like spatial dataflow, CGRAs, systolic arrays, FPGA LUTs blended with fixed-function units, and more. These have all necessitated their own innovations in architecture, compiler, and software-stack integration. However, none of them has yet satisfied all three metrics that NVIDIA's TensorCore and software stack provide, and they generally perform worse. In this paper, we systematically investigate the behavior of DL workloads and the needs they impose on hardware, compilers, and software. We show that SIMD/short-vector execution, caching, and synchronization in a fairly well-understood multicore chip organization we call UPCYCLE can achieve day-zero software maturity and provide large integer-factor speedups over state-of-the-art NVIDIA solutions. Compared to an A100 at small batch size, UPCYCLE is geo-mean 3.8X faster for inference and geo-mean 4.2X faster for training, while consuming only half the power. Second, the UPCYCLE architecture requires no new compiler or software-stack innovation. Third, it provides full DL-architecture coverage and can be instantiated to provide training-optimized, inference-optimized, or balanced training-and-inference systems. Overall, this paper motivates the treatment of software maturity as a first-class design constraint in developing new architectures for DL. This is achieved by revisiting well-understood ideas and upcycling them for future DL architectures.