Nowadays, systems based on machine learning (ML) are widely used in many different domains. Given their popularity, ML models have become targets for various attacks. As a result, research at the intersection of security, privacy, and ML has flourished. The research community has been exploring attack vectors and potential mitigations separately. However, practitioners will likely need to deploy defences against several threats simultaneously. A solution that is optimal for one specific concern may interact negatively with solutions intended to address other concerns. In this work, we explore the potential for conflicting interactions between different solutions that enhance the security/privacy of ML-based systems. We focus on model and data ownership, exploring how ownership verification techniques interact with other ML security/privacy techniques such as differentially private training and robustness against model evasion. We provide a framework and conduct a systematic analysis of pairwise interactions. We show that many pairs are incompatible. Where possible, we provide relaxations to the hyperparameters or to the techniques themselves that allow for simultaneous deployment. Lastly, we discuss the implications and provide guidelines for future work.