Machine-Learning-as-a-Service providers expose machine learning (ML) models to developers through application programming interfaces (APIs). Recent work has shown that attackers can exploit these APIs to extract good approximations of such ML models by querying them with samples of their choosing. We propose VarDetect, a stateful monitor that tracks the distribution of queries made by users of such a service in order to detect model extraction attacks. Harnessing the latent distributions learned by a modified variational autoencoder, VarDetect robustly separates three types of attacker samples from benign samples and successfully raises an alarm for each. Further, with VarDetect deployed as an automated defense mechanism, the extracted substitute models exhibit poor performance and transferability, as intended. Finally, we demonstrate that even adaptive attackers with prior knowledge of VarDetect's deployment are detected by it.