This paper investigates the problem of class-incremental object detection for agricultural applications, where a model must learn new plant species and diseases incrementally without forgetting previously learned ones. We adapt two public datasets to introduce new categories over time, simulating a more realistic and dynamic scenario. We then compare three class-incremental learning methods that leverage different forms of knowledge distillation to mitigate catastrophic forgetting. Our experiments show that all three methods suffer from catastrophic forgetting, but the recent Dynamic Y-KD approach, which additionally uses a dynamic architecture that grows new branches to learn new tasks, outperforms ILOD and Faster-ILOD in most scenarios on both new and old classes. These results highlight the challenges and opportunities of continual object detection for agricultural applications. In particular, the large intra-class and small inter-class variability that is typical of plant images exacerbates the difficulty of learning new categories without interfering with previous knowledge. We publicly release our code to encourage future work.
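The knowledge-distillation idea shared by the compared methods can be illustrated with a soft-target loss: a frozen copy of the previous-task detector (the "teacher") constrains the predictions of the model being trained on new classes (the "student"). The sketch below, in NumPy, is illustrative only; the function names are ours, and it shows the generic Hinton-style classification-distillation term rather than the exact losses of ILOD, Faster-ILOD, or Dynamic Y-KD, which also distill features and box regressions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; T > 1 spreads probability mass."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between teacher and student class distributions,
    averaged over the batch and scaled by T^2 so gradient magnitudes
    stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the frozen teacher
    q = softmax(student_logits, T)  # predictions of the model in training
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return kl.mean() * T * T

# Conceptually, training on a new task then minimizes
#   L = L_detection(new classes) + lambda * distillation_loss(...)
# where lambda balances plasticity (new classes) against stability (old ones).
```

A larger distillation weight preserves old-class accuracy at the cost of learning new categories more slowly, which is exactly the stability-plasticity trade-off the experiments probe.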