Deploying high-performance vision transformer (ViT) models on ubiquitous Internet of Things (IoT) devices to provide high-quality vision services will revolutionize the way we live, work, and interact with the world. Because of the mismatch between the limited resources of IoT devices and resource-intensive ViT models, using cloud servers to assist ViT model training has become mainstream. However, because existing ViT models have large numbers of parameters and floating-point operations (FLOPs), the model parameters transmitted by cloud servers are large and difficult to run on resource-constrained IoT devices. To this end, this paper proposes a transmission-friendly ViT model, TFormer, for deployment on resource-constrained IoT devices with the assistance of a cloud server. The high performance and small numbers of model parameters and FLOPs of TFormer are attributed to the proposed hybrid layer and the proposed partially connected feed-forward network (PCS-FFN). The hybrid layer consists of nonlearnable modules and a pointwise convolution, which obtains multitype and multiscale features with only a few parameters and FLOPs to improve TFormer performance. The PCS-FFN adopts group convolution to reduce the number of parameters. The key idea of this paper is to design TFormer with few model parameters and FLOPs so that applications running on resource-constrained IoT devices can benefit from the high performance of ViT models. Experimental results on the ImageNet-1K, MS COCO, and ADE20K datasets for image classification, object detection, and semantic segmentation tasks demonstrate that the proposed model outperforms other state-of-the-art models. Specifically, TFormer-S achieves 5% higher accuracy on ImageNet-1K than ResNet18 with 1.4$\times$ fewer parameters and FLOPs.
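To make the two named components concrete, the sketch below shows one possible PyTorch rendering of a hybrid layer (nonlearnable modules followed by a pointwise convolution) and a PCS-FFN built from group convolutions. The abstract does not specify which nonlearnable modules are used or the group count, so the multi-scale average pooling, pool sizes, expansion ratio, and `groups` value here are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of the TFormer components named in the abstract.
# Assumptions (not given in the abstract): the "nonlearnable modules" are
# modeled as multi-scale average pooling, and the expansion/group values
# are arbitrary illustrative choices.
import torch
import torch.nn as nn


class HybridLayer(nn.Module):
    """Nonlearnable multi-scale feature extraction followed by a pointwise conv."""

    def __init__(self, dim: int, pool_sizes=(3, 5, 7)):
        super().__init__()
        # Nonlearnable branches: average pooling at several scales (assumed).
        self.pools = nn.ModuleList(
            nn.AvgPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes
        )
        # Pointwise (1x1) convolution mixes the concatenated multi-scale features.
        self.pointwise = nn.Conv2d(dim * (len(pool_sizes) + 1), dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x] + [pool(x) for pool in self.pools]
        return self.pointwise(torch.cat(feats, dim=1))


class PCSFFN(nn.Module):
    """Feed-forward network using grouped 1x1 convolutions to cut parameters."""

    def __init__(self, dim: int, expansion: int = 4, groups: int = 4):
        super().__init__()
        hidden = dim * expansion
        # Group convolution reduces parameter count by roughly a factor of `groups`.
        self.fc1 = nn.Conv2d(dim, hidden, kernel_size=1, groups=groups)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden, dim, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)  # (batch, channels, height, width)
    y = PCSFFN(64)(HybridLayer(64)(x))
    print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Because the pooling branches have no weights and only the 1x1 convolutions are learned, the hybrid layer adds multi-scale context at a small parameter and FLOP cost; the grouped 1x1 convolutions in the PCS-FFN shrink the dominant FFN parameter budget, which is consistent with the transmission-friendly goal stated above.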