Recognizing the explosive growth in the use of AI-based applications, several industrial companies have developed custom ASICs (e.g., Google TPU, IBM RaPiD, Intel NNP-I/NNP-T) and built hyperscale cloud infrastructure around them. These ASICs execute the inference or training operations of AI models requested by users. Because AI models use different data formats and types of operations, the ASICs must support diverse data formats and various operation shapes. However, previous ASIC solutions fulfill these requirements only partially or not at all. To overcome these limitations, we first present an area-efficient multiplier, named the all-in-one multiplier, that supports multiple bit-widths for both integer and floating-point data types. We then build a MAC array from these multi-format multipliers. In addition, the MAC array can be partitioned into multiple blocks that are flexibly fused to support various DNN operation types. We evaluate the practical effectiveness of the proposed MAC array by building an accelerator around it, named All-rounder. In our evaluation, the proposed all-in-one multiplier occupies a 1.49x smaller area than baselines with dedicated multipliers for each data format. We also compare the performance and energy efficiency of the proposed All-rounder against three different accelerators, showing consistent speedups and higher efficiency across AI benchmarks ranging from vision to LLM-based language tasks.
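To illustrate the general idea behind a multi-bit-width multiplier, the sketch below models how one 8x8 unsigned multiply can be composed from four 4x4 sub-multipliers, which could alternatively serve four independent 4-bit products when low precision suffices. This is a standard precision-scalable decomposition offered only as a conceptual analogy; it is not the paper's actual all-in-one multiplier design, and all names are illustrative.

```python
# Conceptual sketch (NOT the paper's RTL): a precision-scalable multiplier.
# Four 4x4 sub-multipliers either combine into one 8x8 product or could run
# independently as four 4x4 products, reusing the same hardware.

def split(x, bits=4):
    """Split an unsigned integer into (high, low) halves of `bits` bits."""
    mask = (1 << bits) - 1
    return (x >> bits) & mask, x & mask

def mul8_from_4x4(a, b):
    """One 8x8 unsigned multiply composed of four 4x4 partial products."""
    ah, al = split(a)
    bh, bl = split(b)
    # a*b = ah*bh*2^8 + (ah*bl + al*bh)*2^4 + al*bl; each term uses one
    # 4x4 sub-multiplier, which a reconfigurable datapath could repurpose.
    return (ah * bh << 8) + ((ah * bl + al * bh) << 4) + (al * bl)

# Exhaustive check over all 8-bit operand pairs.
assert all(mul8_from_4x4(a, b) == a * b
           for a in range(256) for b in range(256))
```

The same decomposition principle extends to floating-point support, since a floating-point multiply reduces to an integer multiply of the mantissas plus exponent addition; how the paper's design actually shares logic across formats is detailed in the full text.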