\emph{The Right to Explanation} and \emph{the Right to be Forgotten} are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an algorithmic decision, the right to be forgotten grants them the right to ask for their data to be deleted from all the databases and models of an organization. Intuitively, enforcing the right to be forgotten may trigger model updates which in turn invalidate previously provided explanations, thus violating the right to explanation. In this work, we investigate the technical implications arising due to the interference between the two aforementioned regulatory principles, and propose \emph{the first algorithmic framework} to resolve the tension between them. To this end, we formulate a novel optimization problem to generate explanations that are robust to model updates due to the removal of training data instances by data deletion requests. We then derive an efficient approximation algorithm to handle the combinatorial complexity of this optimization problem. We theoretically demonstrate that our method generates explanations that are provably robust to worst-case data deletion requests with bounded costs in case of linear models and certain classes of non-linear models. Extensive experimentation with real-world datasets demonstrates the efficacy of the proposed framework.