Machine learning (ML) based systems are now widely deployed across many domains. Given their popularity, ML models have become targets for various attacks, and research at the intersection of security/privacy and ML has flourished. Typically, such work has focused on individual types of security/privacy concerns and their mitigations. In real-life deployments, however, an ML model needs to be protected against several concerns simultaneously, and a protection mechanism that is optimal for one security or privacy concern may interact negatively with mechanisms intended to address other concerns. Despite its practical relevance, this potential for conflict has not been studied adequately. We first provide a framework for analyzing such "conflicting interactions". We then systematically analyze pairwise interactions of protection mechanisms for one concern, model and data ownership verification, with two other classes of ML protection mechanisms: differentially private training and robustness against model evasion. We find that several of these pairwise combinations result in conflicts, and we explore potential approaches for avoiding them. First, we study the effect of relaxing hyperparameters, finding that there is no sweet spot that balances the performance of both protection mechanisms. Second, we explore whether modifying one type of protection mechanism (ownership verification) so as to decouple it from factors that may be impacted by a conflicting mechanism (differentially private training or robustness to model evasion) can avoid the conflict. We show that this approach resolves the conflict between ownership verification and differentially private training, but has no effect on the conflict with robustness to model evasion. Finally, we identify gaps in the study of interactions among other types of ML protection mechanisms.
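To make the kind of pairwise-interaction experiment described above concrete, the following is a minimal, illustrative sketch, not the paper's code: it trains a small classifier on synthetic data together with a backdoor-style watermark trigger set (in the spirit of trigger-set ownership verification schemes such as Adi et al.'s), once with plain SGD and once with DP-SGD via the Opacus library, and then compares how well each model retains the watermark labels. The model architecture, data, trigger-set construction, and all hyperparameters are assumptions chosen for brevity.

```python
# Illustrative sketch only (not the paper's code). It contrasts how well a
# backdoor-style watermark trigger set survives plain SGD vs. DP-SGD
# training with the Opacus library. All names, sizes, and hyperparameters
# below are assumptions chosen for brevity.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

torch.manual_seed(0)

# Synthetic binary task plus a small trigger set: out-of-distribution
# inputs with arbitrary labels that the owner later presents as evidence.
X = torch.randn(2000, 20)
y = (X[:, 0] > 0).long()
X_trig = torch.randn(50, 20) * 4.0        # off-manifold watermark inputs
y_trig = torch.randint(0, 2, (50,))       # arbitrary watermark labels

def train(use_dp: bool) -> float:
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    ds = TensorDataset(torch.cat([X, X_trig]), torch.cat([y, y_trig]))
    loader = DataLoader(ds, batch_size=64, shuffle=True)
    if use_dp:
        # DP-SGD: per-example gradient clipping plus Gaussian noise.
        model, opt, loader = PrivacyEngine().make_private(
            module=model, optimizer=opt, data_loader=loader,
            noise_multiplier=1.0, max_grad_norm=1.0,
        )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(10):
        for xb, yb in loader:
            if len(xb) == 0:  # Poisson sampling can yield empty batches
                continue
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():
        # Watermark (trigger-set) accuracy of the trained model.
        return (model(X_trig).argmax(1) == y_trig).float().mean().item()

print("trigger accuracy, plain SGD:", train(use_dp=False))
print("trigger accuracy, DP-SGD   :", train(use_dp=True))
```

Under these assumptions one would typically expect the trigger-set accuracy to drop under DP-SGD, since per-example clipping and added noise suppress memorization of the out-of-distribution watermark points; this is the flavor of conflict between ownership verification and differentially private training that the abstract refers to.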