Text-Based Person Search (TBPS) is a crucial task in the Internet of Things (IoT) domain that enables accurate retrieval of target individuals from large-scale galleries given only a textual caption. For cross-modal TBPS, it is critical to obtain well-distributed representations in the common embedding space to reduce the inter-modal gap. Furthermore, learning detailed image-text correspondences is essential for discriminating similar targets and enabling fine-grained search. To address these challenges, we present a simple yet effective method named Sew Calibration and Masked Modeling (SCMM), which calibrates cross-modal representations by learning compact and well-aligned embeddings. SCMM introduces two novel losses for fine-grained cross-modal representations: a Sew calibration loss that aligns image and text features according to textual caption quality, and a Masked Caption Modeling (MCM) loss that establishes detailed relationships between textual and visual parts. This dual-pronged strategy enhances feature alignment and cross-modal correspondences, enabling accurate distinction of similar individuals while maintaining a streamlined dual-encoder architecture for real-time inference, which is essential for resource-limited sensors and IoT systems. Extensive experiments on three popular TBPS benchmarks demonstrate the superiority of SCMM, achieving 73.81%, 64.25%, and 57.35% Rank-1 accuracy on CUHK-PEDES, ICFG-PEDES, and RSTPReID, respectively.
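To make the two objectives concrete, the following is a minimal illustrative sketch, not the authors' released implementation: it pairs a dual-encoder with (i) a margin-based alignment loss whose margin is modulated by a per-caption quality score, standing in for the Sew calibration idea, and (ii) a masked-token prediction head over caption tokens, standing in for the MCM loss. All module names, dimensions, quality scores, and hyperparameters here are assumptions for illustration only.

```python
# Hypothetical sketch of SCMM-style objectives; not the paper's exact formulas.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoderSketch(nn.Module):
    def __init__(self, embed_dim=256, vocab_size=30522):
        super().__init__()
        # Stand-in projections; a real system would use e.g. a ViT and a BERT backbone.
        self.image_encoder = nn.Linear(2048, embed_dim)
        self.text_encoder = nn.Linear(768, embed_dim)
        # Lightweight head that predicts masked caption tokens (hypothetical MCM head).
        self.mcm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, image_feats, text_feats):
        img = F.normalize(self.image_encoder(image_feats), dim=-1)
        txt = F.normalize(self.text_encoder(text_feats), dim=-1)
        return img, txt


def sew_style_alignment_loss(img, txt, quality, base_margin=0.2):
    """Margin ranking loss whose margin scales with caption quality.

    `quality` in [0, 1] is an assumed per-caption quality score; scaling the
    margin by it illustrates calibrating alignment strength to caption
    quality, and is not claimed to be the paper's exact formulation.
    """
    sim = img @ txt.t()                      # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)            # matched image-text pairs
    margin = base_margin * quality.unsqueeze(1)
    # Penalize negatives that come within `margin` of the positive pair.
    loss = F.relu(margin + sim - pos)
    loss = loss.masked_fill(torch.eye(len(img), dtype=torch.bool), 0.0)
    return loss.mean()


def masked_caption_loss(mcm_logits, token_targets, mask):
    """Cross-entropy over masked caption positions only (MCM-style)."""
    return F.cross_entropy(mcm_logits[mask], token_targets[mask])


if __name__ == "__main__":
    model = DualEncoderSketch()
    img, txt = model(torch.randn(4, 2048), torch.randn(4, 768))
    quality = torch.rand(4)                  # assumed caption-quality scores
    align = sew_style_alignment_loss(img, txt, quality)

    # Toy masked-caption setup: 16 tokens per caption, ~15% masked.
    token_feats = torch.randn(4, 16, 256)
    targets = torch.randint(0, 30522, (4, 16))
    mask = torch.rand(4, 16) < 0.15
    mask[:, 0] = True                        # ensure at least one masked token
    mcm = masked_caption_loss(model.mcm_head(token_feats), targets, mask)
    print(float(align), float(mcm))
```

In this sketch the two losses would simply be summed (optionally with a weighting coefficient) during training, while inference uses only the dual encoders, which is consistent with the streamlined real-time retrieval setting described above.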