In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain-specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than InfiniBand, the OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar-sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse-scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.
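For readers unfamiliar with the embedding workloads mentioned above, the following is a minimal JAX sketch of the sparse embedding-lookup pattern that SparseCores accelerate; the table shape, feature IDs, and `embed` helper are hypothetical illustrations, not the SparseCore API.

```python
# Minimal, illustrative sketch (not the SparseCore API): the sparse
# embedding-lookup pattern that recommendation models rely on and that
# the abstract says SparseCores accelerate by 5x-7x.
import jax
import jax.numpy as jnp

vocab_size, embed_dim = 50_000, 64               # hypothetical table shape
table = jax.random.normal(jax.random.PRNGKey(0), (vocab_size, embed_dim))

def embed(ids):
    # Each lookup gathers a few rows from a large table; production models
    # issue huge numbers of these small, irregular accesses per step.
    return jnp.take(table, ids, axis=0)

ids = jnp.array([3, 17, 42, 49_999])             # sparse categorical feature IDs
vectors = jax.jit(embed)(ids)                    # shape (4, 64)
```

Such gathers are dominated by irregular memory access rather than dense matrix math, which is consistent with the abstract's point that dedicating only ~5% of die area and power to SparseCores yields a 5x-7x speedup on embedding-heavy models.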