In machine learning (ML) security, attacks such as evasion, model stealing, or membership inference are generally studied individually. Previous work has also shown a relationship between some of these attacks and the curvature of the targeted model's decision function. Consequently, we study an ML model that allows direct control over the curvature of the decision surface: Gaussian Process classifiers (GPCs). For evasion, we find that tuning the GPC's curvature to be robust against one attack algorithm merely enables an attack under a different norm, or a different attack algorithm, to succeed. This finding is backed by our formal analysis showing that static security guarantees are at odds with learning. Concerning intellectual property, we show formally that lazy learning does not necessarily leak all information about the training data. In practice, a seemingly secure curvature can often be found. For example, we are able to secure a GPC against empirical membership inference through proper configuration. In this configuration, however, the GPC's hyper-parameters are leaked, i.e., model reverse engineering succeeds. We conclude that attacks on classification should not be studied in isolation, but in relation to each other.
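To illustrate the kind of curvature control the abstract refers to, the following is a minimal sketch (not the paper's experimental setup), assuming scikit-learn's GaussianProcessClassifier with an RBF kernel: the kernel's length_scale hyper-parameter directly governs how sharply the learned decision surface can curve, with small length scales producing highly curved boundaries and large ones producing smooth, nearly linear boundaries.

```python
# Sketch: varying the RBF length_scale of a GPC to control decision-surface
# curvature. This is an illustrative assumption, not the paper's exact setup.
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

for length_scale in (0.1, 1.0, 10.0):
    # optimizer=None keeps the kernel fixed, so curvature is set by our
    # chosen length_scale rather than by marginal-likelihood optimization.
    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=length_scale),
                                    optimizer=None, random_state=0)
    gpc.fit(X, y)
    print(f"length_scale={length_scale:>4}: train accuracy={gpc.score(X, y):.2f}")
```

Smaller length scales let the classifier wrap tightly around individual training points (high curvature), which is exactly the regime in which attacks such as membership inference and evasion behave differently than on smoother models.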