Machine unlearning is the process through which a deployed machine learning model forgets about one of its training data points. While naively retraining the model from scratch is an option, for deep learning models it almost always incurs a large computational cost. Thus, several approaches for approximate unlearning have been proposed, along with corresponding metrics that formalize what it means for a model to forget a data point. In this work, we first taxonomize approaches and metrics for approximate unlearning. As a result, we identify verification error, i.e., the L2 difference between the weights of an approximately unlearned model and a naively retrained model, as the metric approximate unlearning should optimize for, since it implies a large class of other metrics. We theoretically analyze the canonical stochastic gradient descent (SGD) training algorithm to surface the variables that are relevant to reducing the verification error of approximate unlearning for SGD. From this analysis, we first derive an easy-to-compute proxy for verification error (termed unlearning error). The analysis also informs the design of a new training objective penalty that limits the overall change in weights during SGD and, as a result, facilitates approximate unlearning with lower verification error. We validate our theoretical work through an empirical evaluation on CIFAR-10, CIFAR-100, and IMDB sentiment analysis.
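To make the verification-error metric concrete, the following is a minimal sketch (not code from the paper) of how one might compute it in PyTorch: the L2 norm of the difference between the flattened weights of an approximately unlearned model and a model naively retrained from scratch. The model and function names are illustrative assumptions.

```python
# Hypothetical sketch of the verification-error metric described above:
# the L2 distance between the weights of an approximately unlearned model
# and a naively retrained one. Assumes both are PyTorch modules with
# identical architectures and parameter ordering.
import torch


def verification_error(approx_model: torch.nn.Module,
                       retrained_model: torch.nn.Module) -> float:
    """Return the L2 norm of the difference between the two models' weights."""
    w_approx = torch.cat([p.detach().flatten() for p in approx_model.parameters()])
    w_retrained = torch.cat([p.detach().flatten() for p in retrained_model.parameters()])
    return torch.linalg.norm(w_approx - w_retrained, ord=2).item()
```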