Transformers are at the core of modern AI. They rely heavily on matrix multiplication and require efficient acceleration due to their substantial memory and computational demands. Quantization plays a vital role in reducing memory usage, and it can also be exploited for computation through reconfigurable architectures that accelerate matrix multiplication by dynamically adjusting precision. This paper proposes ADiP, a novel adaptive-precision systolic array architecture designed for efficient matrix multiplication acceleration. The proposed architecture consists of N×N adaptive-precision processing elements (PEs) and shared accumulators. ADiP supports multiple computation modes, including symmetric single-matrix multiplication as well as asymmetric multi-matrix multiplication with a shared input matrix, thereby improving data reuse and PE utilization. In addition, ADiP maximizes computational density by adapting to different precisions, such as 8bit×8bit, 8bit×4bit, and 8bit×2bit. Analytical latency and throughput models are developed for ADiP across versatile architecture configurations. A comprehensive hardware design space exploration in a 22nm commercial technology demonstrates up to 4x higher computational throughput. Furthermore, ADiP is evaluated on transformer workloads from the GPT-2 Medium, BERT Large, and BitNet-1.58B models, delivering latency improvements of up to 53.6% and energy improvements of up to 24.4% on BitNet-1.58B MHA workloads. At a 64×64 size with 4096 PEs, ADiP achieves a peak throughput of 8.192 TOPS, 16.384 TOPS, and 32.768 TOPS for 8bit×8bit, 8bit×4bit, and 8bit×2bit operations, respectively.
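The quoted peak-throughput figures follow directly from the array size once a clock frequency is assumed. The sketch below is not from the paper; it simply reproduces the stated numbers under the assumptions of a 1 GHz clock, two operations per multiply-accumulate, and per-PE MAC packing factors of 1x/2x/4x for the three precision modes (inferred from the abstract's own figures).

```python
# Hypothetical back-of-the-envelope check of ADiP's stated peak throughput.
# Assumptions (not given in the abstract): 1 GHz clock, one MAC = 2 ops,
# and lower weight precision packs proportionally more MACs per PE per cycle.

ARRAY_DIM = 64                   # 64x64 systolic array
NUM_PES = ARRAY_DIM * ARRAY_DIM  # 4096 processing elements
CLOCK_HZ = 1e9                   # assumed clock frequency
OPS_PER_MAC = 2                  # one multiply + one accumulate

# Assumed MACs completed per PE per cycle in each precision mode.
macs_per_cycle = {"8bitx8bit": 1, "8bitx4bit": 2, "8bitx2bit": 4}

for mode, macs in macs_per_cycle.items():
    tops = NUM_PES * macs * OPS_PER_MAC * CLOCK_HZ / 1e12
    print(f"{mode}: {tops:.3f} TOPS")
# Prints 8.192, 16.384, and 32.768 TOPS, matching the abstract's figures.
```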