Federated Learning (FL) has been demonstrated to be vulnerable to backdoor attacks. In this decentralized machine learning framework, most research focuses on the Single-Label Backdoor Attack (SBA), where adversaries share the same target, and neglects the fact that adversaries may be unaware of each other's existence and hold different targets, i.e., the Multi-Label Backdoor Attack (MBA). Unfortunately, directly applying prior work to the MBA is not only ineffective but may also cause the attacks to mitigate one another. In this paper, we first investigate the limitations of applying previous work to the MBA. We then propose M2M, a novel multi-label backdoor attack in FL that adversarially adapts each backdoor trigger so that backdoored samples are processed like clean samples of the target class by the global model. Our key intuition is to establish a connection between the trigger pattern and the target class distribution, allowing different triggers to activate their backdoors along the clean activation paths of their target classes without concern about mutual mitigation. Extensive evaluations comprehensively demonstrate that M2M outperforms various state-of-the-art attack methods. This work aims to alert researchers and developers to this potential threat and to inspire the design of effective detection methods. Our code will be made available later.
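One way to read the stated intuition is as adversarial trigger optimization that pulls the features of triggered inputs toward the clean feature distribution of the target class. The sketch below is a minimal, illustrative PyTorch version of that idea, assuming access to a feature extractor, a binary trigger mask, and the target class's feature mean; the function names and the feature-matching loss are assumptions for illustration, not the paper's exact formulation.

```python
import torch


def optimize_trigger(feature_extractor, clean_batch, target_feat_mean,
                     mask, steps=200, lr=0.01):
    """Illustrative sketch: adapt a trigger pattern so that triggered samples'
    features move toward the target class's feature distribution (here, its
    mean). This is an assumed mechanism, not the paper's exact loss."""
    pattern = torch.rand_like(clean_batch[0], requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        # Stamp the trigger onto clean inputs via the binary mask.
        triggered = clean_batch * (1 - mask) + pattern * mask
        feats = feature_extractor(triggered)
        # Pull triggered features toward the clean target-class centroid, so
        # the backdoor rides the target class's clean activation path.
        loss = ((feats - target_feat_mean) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        pattern.data.clamp_(0, 1)  # keep the trigger a valid image patch
    return pattern.detach()
```

Under this reading, each adversary could run such an optimization against its own target class, which is why different triggers would not need to compete for the same decision regions.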