We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai). Opacus is designed for simplicity, flexibility, and speed. It provides a simple and user-friendly API, and enables machine learning practitioners to make a training pipeline private by adding as few as two lines to their code. It supports a wide variety of layers, including multi-head attention, convolution, LSTM, GRU (and generic RNN), and embedding, out of the box, and provides the means to support other user-defined layers. Opacus computes batched per-sample gradients, offering higher efficiency than the traditional "micro batch" approach. In this paper we present Opacus, detail the principles that drove its implementation and its unique features, and benchmark it against other frameworks for training models with differential privacy, as well as against standard PyTorch.