In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is Google's fifth domain-specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than Infiniband, OCSes and the underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models relying on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar-sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse-scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premises data center.
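As a minimal illustrative sketch (not from the paper), the 4096 chips can be viewed as a 16x16x16 3D torus, where each chip links to six nearest neighbors with wraparound on every axis; the `torus_neighbors` helper below is hypothetical, and the OCS-configured twisted-torus variant would rewire the wraparound links differently.

```python
def torus_neighbors(x, y, z, dim=16):
    """Return the six nearest neighbors of chip (x, y, z) in a dim^3
    3D torus (wraparound links on each axis). Illustrative only:
    TPU v4's 4096 chips form a 16x16x16 torus, and its OCSes can
    reconfigure these links, e.g. into a twisted torus."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [((x + dx) % dim, (y + dy) % dim, (z + dz) % dim)
            for dx, dy, dz in steps]
```

The modulo wraparound is what distinguishes a torus from a plain 3D mesh: a chip on a face of the cube still has six neighbors, so bisection bandwidth and worst-case hop count stay uniform across the machine.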