We study the problem of efficient generative inference for Transformer models, in one of its most challenging settings: large deep models, with tight latency targets and long sequence lengths. A better understanding of the engineering tradeoffs of inference for large Transformer-based models is important, as use cases of these models are growing rapidly throughout application areas. We develop a simple analytical model for inference efficiency to select the best multi-dimensional partitioning techniques optimized for TPU v4 slices based on the application requirements. We combine these with a suite of low-level optimizations to achieve a new Pareto frontier on the latency and model FLOPS utilization (MFU) tradeoffs on 500B+ parameter models that outperforms the FasterTransformer suite of benchmarks. We further show that with appropriate partitioning, the lower memory requirements of multiquery attention (i.e., multiple query heads share a single key/value head) enable scaling up to 32x larger context lengths. Finally, we achieve a low-batch-size latency of 29ms per token during generation (using int8 weight quantization) and a 76% MFU during large-batch-size processing of input tokens, while supporting a long 2048-token context length on the PaLM 540B parameter model.
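The multiquery-attention claim rests on a simple memory argument: during incremental decoding at long context lengths, the key/value cache rather than the weights dominates per-chip memory, and sharing one key/value head across all query heads shrinks that cache by a factor of the number of heads. The following is a minimal JAX sketch of that idea, not the paper's implementation; the tensor dimensions and the `mqa_attention` helper are illustrative assumptions.

```python
# Minimal sketch (assumed shapes, not the paper's code) contrasting the
# key/value cache footprint of standard multi-head attention with multiquery
# attention, where all query heads share a single key/value head.
import jax
import jax.numpy as jnp

batch, n_heads, d_head = 1, 48, 256   # illustrative sizes
seq_len = 2048                        # context length from the abstract

def mha_kv_cache_bytes(dtype_bytes=2):
    # Multi-head attention: every head stores its own keys and values.
    return 2 * batch * seq_len * n_heads * d_head * dtype_bytes

def mqa_kv_cache_bytes(dtype_bytes=2):
    # Multiquery attention: one key/value head shared by all query heads.
    return 2 * batch * seq_len * 1 * d_head * dtype_bytes

print(mha_kv_cache_bytes() / mqa_kv_cache_bytes())  # == n_heads, i.e. 48x smaller cache

def mqa_attention(q, k, v):
    """Multiquery attention.

    q: [batch, n_heads, q_len, d_head]
    k, v: [batch, kv_len, d_head] -- a single shared key/value head.
    """
    logits = jnp.einsum('bhqd,bkd->bhqk', q, k) / jnp.sqrt(q.shape[-1])
    weights = jax.nn.softmax(logits, axis=-1)
    return jnp.einsum('bhqk,bkd->bhqd', weights, v)

# Tiny usage example with random activations.
key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (batch, n_heads, 4, d_head))
k = jax.random.normal(key, (batch, seq_len, d_head))
v = jax.random.normal(key, (batch, seq_len, d_head))
out = mqa_attention(q, k, v)  # [batch, n_heads, 4, d_head]
```

Under these assumed sizes the cache shrinks by the head count, which is the mechanism behind the abstract's claim that appropriate partitioning of multiquery attention enables much longer context lengths.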