Recently, Vision Transformer (ViT) has continually set new milestones in the computer vision field, but its high computation and memory costs hinder its deployment in industrial production. Pruning, a traditional model compression paradigm for hardware efficiency, has been widely applied to various DNN structures. Nevertheless, it remains unclear how to perform pruning tailored specifically to the ViT structure. Considering three key points, namely the structural characteristics of ViTs, their internal data patterns, and the requirements of edge-device deployment, we leverage input token sparsity and propose a computation-aware soft pruning framework, which can be set up on vanilla Transformers of both flat and CNN-type structures, such as Pooling-based ViT (PiT). More concretely, we design a dynamic attention-based multi-head token selector, a lightweight module for adaptive, instance-wise token selection. We further introduce a soft pruning technique, which integrates the less informative tokens identified by the selector module into a package token that participates in subsequent calculations rather than being completely discarded. Through our proposed computation-aware training strategy, our framework adapts the trade-off between accuracy and the computation constraints of specific edge devices. Experimental results show that our framework significantly reduces the computation cost of ViTs while maintaining comparable performance on image classification. Moreover, our framework guarantees that the identified model meets the resource specifications of mobile devices and FPGAs, and even achieves real-time execution of DeiT-T on mobile platforms. For example, our method reduces the latency of DeiT-T to 26 ms (26%$\sim$41% superior to existing works) on a mobile device, with 0.25%$\sim$4% higher top-1 accuracy on ImageNet. Our code will be released soon.
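To make the token selector and the soft pruning step concrete, the following is a minimal PyTorch sketch under stated assumptions: the selector is reduced to a single small scoring MLP (the abstract's multi-head variant is simplified away), keep/prune decisions are made differentiable with Gumbel-Softmax, and pruned tokens are masked and fused into one package token by a weighted average rather than being physically dropped. All class and variable names are hypothetical illustrations, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftTokenPruner(nn.Module):
    """Illustrative sketch of attention-score-based token selection with
    soft pruning: less informative tokens are aggregated into a single
    "package" token instead of being discarded outright."""

    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Lightweight per-token scorer (hypothetical; simplified from the
        # paper's multi-head selector): outputs [prune, keep] logits.
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, num_tokens, embed_dim); token 0 is the class token.
        cls_tok, patches = x[:, :1], x[:, 1:]
        logits = self.scorer(patches)                       # (B, N-1, 2)
        # Differentiable hard keep/prune decisions during training.
        decision = F.gumbel_softmax(logits, tau=1.0, hard=True)
        keep = decision[..., 1:2]                           # 1 = keep, 0 = prune
        prune = 1.0 - keep
        # Kept tokens pass through; pruned tokens are summarized by one
        # package token (their weighted mean) so their information still
        # participates in subsequent calculations.
        kept = patches * keep
        package = (patches * prune).sum(1, keepdim=True) / (
            prune.sum(1, keepdim=True).clamp(min=1e-6)
        )
        return torch.cat([cls_tok, kept, package], dim=1), keep


# Usage: insert between Transformer blocks of a DeiT-style model.
x = torch.randn(2, 197, 192)          # e.g. DeiT-T: 196 patches + class token
pruner = SoftTokenPruner(embed_dim=192)
out, keep_mask = pruner(x)
print(out.shape)                      # torch.Size([2, 198, 192])
```

At inference time one would gather only the kept tokens (shrinking the sequence and hence the FLOPs) rather than masking them as above; the masked formulation here is just the standard trick for keeping the selection differentiable during training.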