Deep Learning (DL) models have achieved superior performance in many application domains, including vision, language, medicine, commercial advertising, entertainment, and more. With this rapid development, both DL applications and the underlying serving hardware have demonstrated strong scaling trends, i.e., Model Scaling and Compute Scaling: for example, recent pre-trained models with hundreds of billions of parameters consume memory on the order of terabytes, while the newest GPU accelerators provide hundreds of TFLOPS. Under both scaling trends, new problems and challenges emerge in DL inference serving systems, which gradually trend towards Large-scale Deep learning Serving systems (LDS). This survey aims to summarize and categorize the emerging challenges and optimization opportunities for large-scale deep learning serving systems. By providing a novel taxonomy, summarizing the computing paradigms, and elaborating on recent technical advances, we hope that this survey can shed light on new optimization perspectives and motivate novel works in large-scale deep learning system optimization.