Traffic sign detection is a vital task in the visual perception system of self-driving cars and automated driving systems. Recently, Transformer-based models have achieved encouraging results on various computer vision tasks. However, we observed that the vanilla ViT cannot yield satisfactory results in traffic sign detection because the overall size of the available datasets is very small and the class distribution of traffic signs is extremely imbalanced. To overcome this problem, a novel Pyramid Transformer with locality mechanisms is proposed in this paper. Specifically, the Pyramid Transformer uses several spatial pyramid reduction layers to shrink and embed the input image into tokens with rich multi-scale context by using atrous convolutions. Moreover, it inherits an intrinsic scale-invariance inductive bias and is able to learn local feature representations for objects at various scales, thereby enhancing the network's robustness against the size discrepancy of traffic signs. Experiments are conducted on the German Traffic Sign Detection Benchmark (GTSDB). The results demonstrate the superiority of the proposed model on traffic sign detection tasks. More specifically, the Pyramid Transformer achieves 75.6% mAP on GTSDB when used as the backbone of Cascade R-CNN, surpassing most well-known and widely used state-of-the-art methods.
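To illustrate the idea of a spatial pyramid reduction layer built from atrous convolutions, a minimal PyTorch sketch is shown below. This is not the paper's implementation; the module name, dilation rates, embedding dimension, and stride are illustrative assumptions used only to show how parallel dilated convolutions can downsample an image and produce tokens with multi-scale context.

```python
# Minimal sketch of a spatial-pyramid reduction (patch embedding) layer.
# Assumptions: PyTorch; dilation rates, embed_dim, and stride are hypothetical
# choices for illustration, not values from the paper.
import torch
import torch.nn as nn


class PyramidReductionEmbed(nn.Module):
    """Shrinks a feature map and embeds it into tokens using parallel
    atrous (dilated) convolutions, so each token carries multi-scale context."""

    def __init__(self, in_ch=3, embed_dim=96, stride=4, dilations=(1, 2, 3)):
        super().__init__()
        branch_dim = embed_dim // len(dilations)
        # One branch per dilation rate; padding = dilation keeps all branches
        # at the same output resolution for a 3x3 kernel.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_dim, kernel_size=3, stride=stride,
                      padding=d, dilation=d)
            for d in dilations
        ])
        self.norm = nn.LayerNorm(branch_dim * len(dilations))

    def forward(self, x):
        # x: (B, C, H, W) -> concatenate multi-dilation features along channels
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, D, H', W')
        b, d, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', D)
        return self.norm(tokens), (h, w)


# Usage: embed a batch of images into multi-scale tokens for a Transformer stage.
x = torch.randn(2, 3, 224, 224)
tokens, hw = PyramidReductionEmbed()(x)
print(tokens.shape, hw)  # torch.Size([2, 3136, 96]) (56, 56) with the defaults above
```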