Transformer models have gained popularity because of their superior inference accuracy and throughput. However, transformers are computation-intensive, leading to long inference times. Existing work on transformer inference acceleration is limited by either modifications to the transformer architecture or the need for specialized hardware. In this paper, we identify opportunities to use memoization to accelerate the self-attention mechanism in transformers without these limitations. Building on a unique observation that there is rich similarity in attention computation across inference sequences, we build a memoization database that leverages emerging big-memory systems. We introduce a novel embedding technique to find semantically similar inputs and thereby identify computation similarity. We also introduce a series of techniques, such as memory mapping and selective memoization, to avoid memory copies and unnecessary overhead. We achieve a 22% inference-latency reduction on average (up to 68%) with negligible loss in inference accuracy.
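To make the idea concrete, the following is a minimal sketch of attention memoization under stated assumptions: inputs are embedded (here, hypothetically, by mean-pooling token vectors) and a cached attention output is reused when a new input's embedding is sufficiently similar. The class name, embedding scheme, and similarity threshold are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product self-attention.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

class AttentionMemoizer:
    """Caches attention outputs keyed by a sequence embedding and
    reuses a cached result when a new input is similar enough."""
    def __init__(self, threshold=0.99):
        self.threshold = threshold
        self.keys = []    # unit-norm sequence embeddings
        self.values = []  # cached attention outputs

    def _embed(self, x):
        # Hypothetical embedding: mean-pool token vectors, then normalize,
        # so dot product below equals cosine similarity.
        e = x.mean(axis=0)
        return e / (np.linalg.norm(e) + 1e-12)

    def lookup_or_compute(self, q, k, v):
        e = self._embed(q)
        if self.keys:
            sims = np.array([e @ ke for ke in self.keys])
            i = int(sims.argmax())
            if sims[i] >= self.threshold:
                return self.values[i]    # cache hit: skip attention entirely
        out = attention(q, k, v)         # cache miss: compute and store
        self.keys.append(e)
        self.values.append(out)
        return out
```

A selective variant would additionally decide, per layer or per sequence, whether memoization is likely to pay off before consulting the cache.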