Transformers have recently shown encouraging progress in computer vision. In this work, we improve the original Pyramid Vision Transformer (PVTv1) with three designs: (1) overlapping patch embedding, (2) convolutional feed-forward networks, and (3) linear-complexity attention layers. With these simple modifications, our PVTv2 significantly improves on PVTv1 in classification, detection, and segmentation. Moreover, PVTv2 achieves better performance than recent works, including Swin Transformer. We hope this work will make state-of-the-art vision Transformer research more accessible. Code is available at https://github.com/whai362/PVT .
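The abstract only names the three designs; below is a minimal PyTorch-style sketch of what each could look like, assuming standard implementations. The class names (OverlapPatchEmbed, ConvFFN, LinearSRA) and parameter choices (e.g. a 7x7 pooled key/value size) are illustrative assumptions for exposition, not a reproduction of the paper's released code.

```python
import torch
import torch.nn as nn

class OverlapPatchEmbed(nn.Module):
    """(1) Overlapping patch embedding: a strided convolution whose kernel is
    larger than its stride, so neighboring patches share pixels."""
    def __init__(self, in_ch=3, embed_dim=64, patch_size=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size,
                              stride=stride, padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, D, H/stride, W/stride)
        B, D, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)       # (B, N, D) token sequence
        return self.norm(x), H, W

class ConvFFN(nn.Module):
    """(2) Convolutional feed-forward network: a 3x3 depth-wise convolution
    between the two linear layers injects local positional information."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1,
                                groups=hidden_dim)  # depth-wise
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):                # x: (B, N, D), N == H*W
        x = self.fc1(x)
        B, N, C = x.shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = self.dwconv(x).flatten(2).transpose(1, 2)
        return self.fc2(self.act(x))

class LinearSRA(nn.Module):
    """(3) Linear-complexity attention: keys/values are average-pooled to a
    fixed spatial size, so cost grows linearly with the number of queries."""
    def __init__(self, dim, num_heads=1, pool_size=7):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.pool = nn.AdaptiveAvgPool2d(pool_size)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):                # x: (B, N, D), N == H*W
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
        # Pool the token map to a fixed pool_size x pool_size grid for k/v.
        pooled = self.pool(x.transpose(1, 2).reshape(B, C, H, W))
        pooled = pooled.flatten(2).transpose(1, 2)      # (B, pool^2, C)
        kv = self.kv(pooled).reshape(B, -1, 2, self.num_heads,
                                     C // self.num_heads)
        k, v = kv.permute(2, 0, 3, 1, 4)       # each: (B, heads, pool^2, hd)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, pool^2)
        x = (attn.softmax(-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(x)
```

In this sketch, one stage would chain the modules as embed -> attention -> FFN on the token sequence, passing the spatial size (H, W) through so the convolutional parts can reshape tokens back into a feature map.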