For the SSLAD-Track 3B challenge on continual learning, we propose COntinual Learning with Transformer (COLT). We find that transformers suffer less from catastrophic forgetting than convolutional neural networks. The core idea of our method is to equip a transformer-based feature extractor with an old-knowledge distillation loss and a head-expanding strategy to combat catastrophic forgetting. In this report, we first introduce the overall framework of continual learning for object detection, and then analyse how the key components of our solution withstand catastrophic forgetting. Our method achieves 70.78 mAP on the SSLAD-Track 3B challenge test set.
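To make the two anti-forgetting ingredients concrete, the following is a minimal, hypothetical PyTorch sketch, not the actual COLT implementation: a soft-target distillation loss against a frozen snapshot of the old model, and a head-expansion step that appends a fresh head when a new task arrives. All names (`ColtStyleModel`, `expand_head`, `distill_loss`, `snapshot_teacher`) are assumptions, and the heads are simplified to plain classifiers rather than full detection heads.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class ColtStyleModel(nn.Module):
    """Hypothetical sketch: transformer backbone with expandable per-task heads."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone      # assumed to map inputs -> (batch, feat_dim)
        self.feat_dim = feat_dim
        self.num_classes = num_classes
        self.heads = nn.ModuleList()  # grows by one head per task/domain
        self.expand_head()            # head for the first task

    def expand_head(self) -> None:
        # Head expanding: add a fresh classifier so learning a new task does
        # not overwrite the decision boundaries of earlier heads.
        self.heads.append(nn.Linear(self.feat_dim, self.num_classes))

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return feats, [head(feats) for head in self.heads]


def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 temperature: float = 2.0) -> torch.Tensor:
    """Old-knowledge distillation: match the new model's softened predictions
    to those of a frozen copy of the model trained on earlier tasks."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)


def snapshot_teacher(model: nn.Module) -> nn.Module:
    """Freeze a copy of the current model to serve as the distillation teacher
    before training on the next task."""
    teacher = copy.deepcopy(model).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher
```

Under these assumptions, a training step on a new task would combine the current-task loss with `distill_loss` between teacher and student outputs on new-task images, so old knowledge is preserved without replaying old data, while `expand_head` keeps earlier heads intact.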