With the rapid advancement of deep learning, state-of-the-art algorithms have been deployed in a wide range of social settings. Nonetheless, some of these algorithms have been found to exhibit biases and produce inequitable outcomes. Existing debiasing methods face challenges such as poor data utilization or complex training requirements. In this work, we find that a backdoor attack can construct an artificial bias similar to the model bias that arises in standard training. Given the strong adjustability of backdoor triggers, we are motivated to mitigate model bias by carefully designing a reverse artificial bias created through a backdoor attack. Building on this, we propose a knowledge-distillation-based backdoor debiasing framework that effectively reduces the model bias learned from the original data while minimizing the security risks introduced by the backdoor attack. The proposed solution is validated on both image and structured datasets, showing promising results. This work advances the understanding of backdoor attacks and highlights their potential for beneficial applications. The code for the study can be found at \url{https://anonymous.4open.science/r/DwB-BC07/}.
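To make the core idea concrete, the following is a minimal sketch, not the authors' released implementation: a backdoor trigger is stamped on bias-conflicting samples so that the artificial (trigger-to-label) shortcut opposes the original spurious correlation, and a clean student is then distilled from the poisoned teacher on trigger-free inputs so it does not inherit the backdoor. Names such as `add_trigger`, `build_reverse_bias_batch`, the corner-patch trigger, and the distillation hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of reverse-bias poisoning plus distillation (assumed design,
# not the code in the linked repository).
import torch
import torch.nn.functional as F

def add_trigger(x, size=4, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (assumed trigger pattern)."""
    x = x.clone()
    x[..., -size:, -size:] = value
    return x

def build_reverse_bias_batch(x, y, bias_attr):
    """Poison the bias-conflicting samples (bias attribute disagrees with the label),
    so the artificial trigger shortcut pushes against the original spurious one."""
    minority = bias_attr != y
    x_poisoned = x.clone()
    x_poisoned[minority] = add_trigger(x[minority])
    return x_poisoned, y

def distill_step(student, teacher, x_clean, y, T=4.0, alpha=0.7):
    """One knowledge-distillation step on clean (trigger-free) inputs, so the student
    inherits the debiased decision boundary but not the backdoor behaviour."""
    with torch.no_grad():
        t_logits = teacher(x_clean)
    s_logits = student(x_clean)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, y)
    return alpha * kd + (1 - alpha) * ce
```

In this reading, the teacher is trained on the poisoned mixture so the designed artificial bias cancels the data bias, while distillation on clean inputs is what reduces the security risk carried by the trigger.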