Accurate fisheries data are crucial for effective and sustainable marine resource management. With the recent adoption of Electronic Monitoring (EM) systems, more video data are now being collected than can feasibly be reviewed manually. This paper addresses this challenge by developing an optimized deep learning pipeline for automated fish re-identification (Re-ID) using the novel AutoFish dataset, which simulates conveyor-belt EM systems with six visually similar fish species. We demonstrate that key Re-ID metrics (Rank-1 and mAP@k) are substantially improved by using hard triplet mining in conjunction with a custom image transformation pipeline that includes dataset-specific normalization. By employing these strategies, we demonstrate that the Vision Transformer-based Swin-T architecture consistently outperforms the Convolutional Neural Network-based ResNet-50, achieving a peak performance of 41.65% mAP@k and 90.43% Rank-1 accuracy. An in-depth analysis reveals that the primary challenge is distinguishing visually similar individuals of the same species (intra-species errors), where viewpoint inconsistency proves significantly more detrimental than partial occlusion. The source code and documentation are available at: https://github.com/msamdk/Fish_Re_Identification.git
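To make the training recipe concrete, the following is a minimal PyTorch sketch of the two key ingredients named above: an image transformation pipeline with dataset-specific normalization, and batch-hard triplet mining on Swin-T embeddings. The `AUTOFISH_MEAN`/`AUTOFISH_STD` values are illustrative placeholders (the real statistics must be computed from the AutoFish training split), the `batch_hard_triplet_loss` helper is our own hypothetical name, and the loss assumes PK-sampled batches; this is a sketch under those assumptions, not the paper's exact implementation.

```python
import torch
import torchvision.transforms as T
from torchvision.models import swin_t

# Placeholder channel statistics; compute the actual values from the
# AutoFish training split for dataset-specific normalization.
AUTOFISH_MEAN = [0.5, 0.5, 0.5]
AUTOFISH_STD = [0.25, 0.25, 0.25]

transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=AUTOFISH_MEAN, std=AUTOFISH_STD),
])

# Swin-T backbone as an embedding extractor: drop the classification
# head so the pooled features serve as Re-ID embeddings.
model = swin_t(weights="IMAGENET1K_V1")
model.head = torch.nn.Identity()

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet mining: for each anchor, select the hardest
    positive (farthest sample with the same fish ID) and the hardest
    negative (closest sample with a different ID) within the batch.

    Assumes PK sampling (P identities x K images each) so every anchor
    has at least one positive and one negative in the batch.
    """
    dist = torch.cdist(embeddings, embeddings, p=2)       # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye                                # positives, excluding self
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Mask out same-ID pairs with +inf so min() picks a true negative.
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```

The PK-sampling assumption is what makes the hardest-positive/hardest-negative selection well defined for every anchor; without it, an identity appearing only once in a batch would contribute no valid triplet.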