There is a growing demand for shifting the delivery of AI capability from cloud data centers to edge or end devices, exemplified by the fast-emerging real-time AI-based apps running on smartphones, AR/VR devices, autonomous vehicles, and various IoT devices. The shift, however, has been seriously hampered by the large and growing gap between the computing demands of DNNs and the computing power available on edge or end devices. This article presents the design of XGen, an optimizing framework for DNNs designed to bridge that gap. XGen takes cross-cutting co-design as its first-order consideration. Its full-stack, AI-oriented optimization consists of a number of innovative techniques at every layer of the DNN software stack, all designed to work in a cooperative manner. This unique technology enables XGen to optimize various DNNs, including those with extreme depth (e.g., BERT, GPT, and other transformers), and to generate code that runs several times faster than code produced by existing DNN frameworks, while delivering the same level of accuracy.