This paper determines, through an interdisciplinary law and computer science lens, whether data minimisation and purpose limitation can be meaningfully implemented in data-driven algorithmic systems, including personalisation, profiling and decision-making systems. Our analysis reveals that the two legal principles continue to play an important role in mitigating the risks of personal data processing, allowing us to rebut claims that they have become obsolete. The paper goes beyond this finding, however. We highlight that even though these principles are important safeguards in the systems under consideration, there are significant limits to their practical implementation, namely: (i) the difficulties of measuring law, the resulting open computational research questions, and the lack of concrete guidelines for practitioners; (ii) the unacknowledged trade-offs between various GDPR principles, notably between data minimisation on the one hand and accuracy or fairness on the other; (iii) the lack of practical means of removing personal data from trained models in order to ensure legal compliance; and (iv) the insufficient enforcement of data protection law.