What happens when a machine learning dataset is deprecated for legal, ethical, or technical reasons, but continues to be widely used? In this paper, we examine the public afterlives of several prominent deprecated or redacted datasets, including ImageNet, 80 Million Tiny Images, MS-Celeb-1M, Duke MTMC, Brainwash, and HRT Transgender, in order to inform a framework for more consistent, ethical, and accountable dataset deprecation. Building on prior research, we find that there is a lack of consistency, transparency, and centralized sourcing of information on the deprecation of datasets, and as such, these datasets and their derivatives continue to be cited in papers and circulate online. These datasets that never die -- which we term "zombie datasets" -- continue to inform the design of production-level systems, causing technical, legal, and ethical challenges; in so doing, they risk perpetuating the harms that prompted their supposed withdrawal, including concerns around bias, discrimination, and privacy. Based on this analysis, we propose a Dataset Deprecation Framework that includes considerations of risk, mitigation of impact, appeal mechanisms, timeline, post-deprecation protocol, and publication checks that can be adapted and implemented by the machine learning community. Drawing on work on datasheets and checklists, we further offer two sample dataset deprecation sheets and propose a centralized repository that tracks which datasets have been deprecated and could be incorporated into the publication protocols of venues like NeurIPS.