Machine learning (ML) models trained on data from potentially untrusted sources are vulnerable to poisoning. A small, maliciously crafted subset of the training inputs can cause the model to learn a "backdoor" task (e.g., misclassify inputs with a certain feature) in addition to its main task. While backdoor attacks remain largely a hypothetical threat, state-of-the-art defenses require massive changes to existing ML pipelines and are too complex for practical deployment. In this paper, we take a pragmatic view and investigate the natural resistance of ML pipelines to backdoor attacks, i.e., resistance that can be achieved without changes to how models are trained. We design, implement, and evaluate Mithridates, a new method that helps practitioners answer two actionable questions: (1) how well does my model resist backdoor poisoning attacks?, and (2) how can I increase its resistance without changing the training pipeline? Mithridates leverages hyperparameter search, a tool that ML developers already use extensively, to balance the model's accuracy and resistance to backdoor learning, without disruptive changes to the pipeline. We show that hyperparameters found by Mithridates increase resistance to multiple types of backdoor attacks by 3-5x with only a slight impact on model accuracy. We also discuss extensions to AutoML and federated learning.
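The core idea, searching over hyperparameters a developer already tunes to trade off clean accuracy against backdoor learning, can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: the choice of Optuna for the search, the scikit-learn MLP, the synthetic single-feature trigger, and the hyperparameter ranges are all assumptions made purely for illustration (in particular, it measures resistance against a trigger it constructs itself, rather than against an unknown attacker).

```python
# Minimal illustrative sketch (assumptions: Optuna for multi-objective search,
# a synthetic binary task, a single-feature "trigger" as the backdoor, and a
# scikit-learn MLP as the model). Not the paper's actual method.
import numpy as np
import optuna
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_data(n=4000, d=20, poison_frac=0.01, target_label=1):
    """Synthetic binary task; a small fraction of training rows carry a
    trigger (feature 0 set to a large value) and are relabeled to target_label."""
    X = rng.normal(size=(n, d))
    y = (X[:, 1] + X[:, 2] > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    n_poison = int(poison_frac * len(X_tr))
    idx = rng.choice(len(X_tr), n_poison, replace=False)
    X_tr[idx, 0] = 5.0           # trigger feature
    y_tr[idx] = target_label     # attacker-chosen label
    return X_tr, y_tr, X_te, y_te, target_label

X_tr, y_tr, X_te, y_te, target = make_data()

def objective(trial):
    # Hyperparameters a practitioner already tunes; ranges are illustrative.
    params = dict(
        hidden_layer_sizes=(trial.suggest_int("width", 8, 64),),
        alpha=trial.suggest_float("alpha", 1e-5, 1e-1, log=True),
        learning_rate_init=trial.suggest_float("lr", 1e-4, 1e-1, log=True),
        batch_size=trial.suggest_categorical("batch", [32, 128, 512]),
        max_iter=200,
        random_state=0,
    )
    model = MLPClassifier(**params).fit(X_tr, y_tr)
    clean_acc = model.score(X_te, y_te)
    # Backdoor success: apply the trigger to clean test inputs and measure how
    # often the model outputs the attacker-chosen label.
    X_trig = X_te.copy()
    X_trig[:, 0] = 5.0
    backdoor_rate = float(np.mean(model.predict(X_trig) == target))
    return clean_acc, backdoor_rate

# Two objectives: maximize clean accuracy, minimize backdoor success.
study = optuna.create_study(directions=["maximize", "minimize"])
study.optimize(objective, n_trials=30)
for t in study.best_trials:  # Pareto-optimal accuracy/resistance trade-offs
    print(t.values, t.params)
```

The multi-objective study returns a Pareto front of hyperparameter settings, from which a practitioner could pick a configuration that sacrifices little clean accuracy for a substantially lower backdoor success rate; this mirrors the accuracy/resistance balance described in the abstract without modifying the training code itself.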