Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities. Current methods for explaining their predictions are resource-intensive. Most crucially, they require prohibitively large amounts of extra memory, since they rely on backpropagation, which allocates almost twice as much GPU memory as the forward pass. This makes it difficult, if not impossible, to use them in production. We present AtMan, which provides explanations of generative transformer models at almost no extra cost. Specifically, AtMan is a modality-agnostic perturbation method that manipulates the attention mechanisms of transformers to produce relevance maps for the input with respect to the output prediction. Instead of using backpropagation, AtMan applies a parallelizable token-based search method based on cosine-similarity neighborhoods in the embedding space. Our exhaustive experiments on text and image-text benchmarks demonstrate that AtMan outperforms current state-of-the-art gradient-based methods on several metrics while remaining computationally efficient. As such, AtMan is suitable for use in large-model inference deployments.
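The abstract describes the core mechanism only at a high level: suppress the attention paid to an input token (together with its cosine-similar neighbors), re-run the forward pass, and read the drop in the target token's probability as that token's relevance. Below is a minimal sketch of that idea on a toy single-head attention layer; the random weights, the 0.7 similarity threshold, and the use of full masking (rather than a tunable suppression factor applied across all layers and heads) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

seq_len, d_model, vocab_size = 6, 16, 100
emb = torch.randn(seq_len, d_model)                     # embeddings of one tokenized input
W_q = torch.randn(d_model, d_model) / d_model ** 0.5
W_k = torch.randn(d_model, d_model) / d_model ** 0.5
W_v = torch.randn(d_model, d_model) / d_model ** 0.5
W_out = torch.randn(d_model, vocab_size) / d_model ** 0.5
target_token = 42                                       # hypothetical token the model predicted


def toy_attention(x, suppress=None):
    """Single-head attention; pre-softmax scores toward suppressed
    token positions are masked out (full suppression)."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / d_model ** 0.5
    if suppress is not None:
        scores[:, suppress] = -1e9                      # erase attention to these tokens
    return F.softmax(scores, dim=-1) @ v


def target_logprob(x, suppress=None):
    """Log-probability of the target token, read off the last position."""
    logits = toy_attention(x, suppress)[-1] @ W_out
    return F.log_softmax(logits, dim=-1)[target_token]


# Cosine-similarity neighborhood: each token is suppressed together with
# tokens whose embeddings lie close to it (0.7 is an assumed threshold).
sim = F.cosine_similarity(emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)
base = target_logprob(emb)

relevance = []
for i in range(seq_len):
    neighbours = (sim[i] > 0.7).nonzero().flatten().tolist()
    relevance.append((base - target_logprob(emb, suppress=neighbours)).item())

print(relevance)   # larger drop in log-probability => more relevant input token
```

Because each perturbed forward pass is independent, the per-token evaluations can be batched and run in parallel, which is what makes the approach memory-cheap relative to backpropagation-based attribution.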