We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including not only test-time evasion attacks to generate adversarial examples against deep neural networks, but also training-time poisoning attacks against support vector machines and many other algorithms. These attacks enable evaluating the security of learning algorithms and of the corresponding defenses under both white-box and black-box threat models. To this end, secml provides built-in functions to compute security evaluation curves, showing how quickly classification performance decreases against increasing adversarial perturbations of the input data. secml also includes explainability methods to help understand why adversarial attacks succeed against a given model, by visualizing the most influential features and training prototypes contributing to each decision. It is distributed under the Apache License 2.0, and hosted at https://gitlab.com/secml/secml.
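To give a flavor of the workflow described above, the following is a minimal, illustrative sketch of training a classifier and running a test-time evasion attack with secml. Class names such as CDLRandomBlobs, CClassifierSVM, and CAttackEvasionPGD, as well as constructor parameters and the fit/run signatures, are assumptions based on typical secml usage and may differ across library versions; consult the repository documentation for the exact API.

```python
# Illustrative sketch only: class names, parameters, and return values are
# assumptions and may differ across secml versions.
from secml.data.loader import CDLRandomBlobs
from secml.ml.classifiers import CClassifierSVM
from secml.adv.attacks.evasion import CAttackEvasionPGD

# Load a toy two-class dataset and train a linear SVM.
dataset = CDLRandomBlobs(n_features=2, centers=2, random_state=0).load()
clf = CClassifierSVM()
clf.fit(dataset.X, dataset.Y)

# Craft evasion (test-time) adversarial examples with PGD,
# bounded by an L2 perturbation of radius dmax.
attack = CAttackEvasionPGD(
    classifier=clf,
    distance='l2',
    dmax=1.0,
    lb=None, ub=None,  # no box constraints on the features
    solver_params={'eta': 0.1, 'max_iter': 100})
y_pred_adv, _, adv_ds, _ = attack.run(dataset.X, dataset.Y)
```

Repeating such an attack for increasing values of the perturbation budget dmax, and plotting the resulting accuracy, yields the security evaluation curves mentioned in the abstract.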