In this paper, we provide a deep dive into the deployment of inference accelerators at Facebook. Many of our ML workloads have unique characteristics, such as sparse memory accesses, large model sizes, and high compute, memory, and network bandwidth requirements. We co-designed a high-performance, energy-efficient inference accelerator platform based on these requirements. We describe the inference accelerator platform ecosystem we developed and deployed at Facebook: both the hardware, through the Open Compute Platform (OCP), and the software framework and tooling, through PyTorch/Caffe2/Glow. A defining characteristic of this ecosystem from the start is its openness, which enables a variety of AI accelerators from different vendors. This platform, with six low-power accelerator cards alongside a single-socket host CPU, allows us to serve models of high complexity that cannot be easily or efficiently run on CPUs. We describe various performance optimizations, at both the platform and accelerator level, that enable this platform to serve production traffic at Facebook. We also share deployment challenges and lessons learned during performance optimization, and provide guidance for future inference hardware co-design.