AI-induced societal harms mirror existing problems in domains where AI replaces or complements traditional methodologies. Trustworthy AI discourses, however, postulate the homogeneity of AI, seek common causes for the harms it generates, and demand uniform human interventions. Such AI monism has spurred legislation for omnibus AI laws that require any high-risk AI system to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as demonstrated by the EU AI Regulation and the U.S. draft Algorithmic Accountability Act. Yet it is irrational to require every high-risk or critical AI to comply with the full set of safety, fairness, accountability, and privacy regulations when AIs entailing safety risks, biases, infringements, and privacy problems can be separated from one another. Legislators should instead gradually adapt existing regulations by categorizing AI systems according to the types of societal harms they induce. Accordingly, this paper proposes the following categorizations, subject to ongoing empirical reassessment. First, for intelligent agents, safety regulations must be adapted to address the incremental accident risks arising from autonomous behavior. Second, for discriminative models, the law must focus on mitigating allocative harms and on disclosing the marginal effects of immutable features. Third, for generative models, the law should optimize developer liability for data mining and content generation, balancing the potential social harms of infringing content against the negative impact of excessive filtering, and should identify the cases in which a model's non-human identity must be disclosed. Lastly, for cognitive models, data protection law should be adapted to effectively address privacy, surveillance, and security problems and to facilitate governance built on public-private partnerships.