Data drift is a thorny challenge when deploying person re-identification (ReID) models on real-world devices, where the data distribution differs significantly from that of the training environment and keeps changing. To tackle this issue, we propose a federated spatial-temporal incremental learning approach, named FedSTIL, which leverages both lifelong learning and federated learning to continuously optimize models deployed on many distributed edge clients. Unlike previous efforts, FedSTIL aims to mine spatial-temporal correlations among the knowledge learnt from different edge clients. Specifically, the edge clients first periodically extract general representations of drifted data to optimize their local models. Then, the knowledge learnt by the edge clients is aggregated by a centralized parameter server, where it is selectively and attentively distilled along the spatial and temporal dimensions with carefully designed mechanisms. Finally, the distilled informative spatial-temporal knowledge is sent back to the correlated edge clients to further improve the recognition accuracy of each edge client via lifelong learning. Extensive experiments on a mixture of five real-world datasets demonstrate that our method outperforms others by nearly 4% in Rank-1 accuracy, while reducing communication cost by 62%. All implementation code is publicly available at https://github.com/MSNLAB/Federated-Lifelong-Person-ReID
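The client-server loop described above (clients extract representations from drifted data, the server attentively distills them across space and time, and the distilled knowledge flows back) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual mechanism: the function names, the softmax attention over clients, the moving-average temporal distillation, and all mixing coefficients are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM = 8  # assumed representation dimensionality for illustration

def client_update(local_knowledge, drifted_batch):
    # Each edge client extracts a general representation of its drifted
    # data (here simply the mean feature vector, an assumption) and uses
    # it to refresh its local knowledge.
    rep = drifted_batch.mean(axis=0)
    return 0.5 * local_knowledge + 0.5 * rep

def server_distill(client_reps, history=None):
    # Spatial distillation (assumed form): attention weights over clients,
    # computed as a softmax of each representation's similarity to the mean.
    mean_rep = np.mean(client_reps, axis=0)
    scores = np.array([r @ mean_rep for r in client_reps])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    spatial = np.sum(weights[:, None] * np.stack(client_reps), axis=0)
    # Temporal distillation (assumed form): blend with knowledge retained
    # from earlier rounds via an exponential moving average.
    return spatial if history is None else 0.8 * spatial + 0.2 * history

# Simulate three edge clients over two communication rounds.
knowledge = [np.zeros(FEAT_DIM) for _ in range(3)]
server_memory = None
for _ in range(2):
    reps = [client_update(k, rng.normal(size=(16, FEAT_DIM)))
            for k in knowledge]
    server_memory = server_distill(reps, server_memory)
    # Distilled spatial-temporal knowledge is sent back to the clients.
    knowledge = [0.5 * r + 0.5 * server_memory for r in reps]
```

Only the compact distilled vector travels between clients and server in this sketch, which is the kind of exchange that keeps communication cost low relative to sharing full model weights.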