Recent digital rights frameworks grant individuals the ability (e.g., via the ``right to be forgotten'' embodied in the GDPR and CCPA) to request that a \textit{data controller} -- roughly, any system that stores and manipulates personal information -- \emph{delete} their data. We ask how deletion should be formalized in complex systems that interact with many parties and store derivative information. Two broad principles underlie existing approaches to formalizing deletion: \emph{confidentiality} and \emph{control}. We build a unified formalism for deletion that encompasses previous approaches as special cases. We argue that existing work on deletion-as-control (in particular, ``machine unlearning'') can be reframed in the language of (adaptive) history independence, for which we provide a general definition. We also exhibit classes of controllers -- such as those that publish differentially private statistics without later updating them -- that satisfy intuitive notions of deletion and our formal definition, yet fall outside the scope of previous approaches.