AI-induced societal harms mirror existing problems in the domains where AI replaces or complements traditional methodologies. However, trustworthy AI discourses postulate the homogeneity of AI, aim to derive common causes for the harms AI systems generate, and demand uniform human interventions. Such AI monism has spurred legislation for omnibus AI laws requiring any high-risk AI system to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as demonstrated by the EU AI Regulation and the U.S. draft Algorithmic Accountability Act. However, it is irrational to require high-risk or critical AIs to comply with all of the safety, fairness, accountability, and privacy regulations when AIs entailing safety risks, biases, infringements, and privacy problems can be distinguished from one another. Legislators should instead gradually adapt existing regulations by categorizing AI systems according to the types of societal harms they induce. Accordingly, this paper proposes the following categorizations, subject to ongoing empirical reassessment. First, regarding intelligent agents, safety regulations must be adapted to address the incremental accident risks arising from autonomous behavior. Second, regarding discriminative models, law must focus on the mitigation of allocative harms and the disclosure of the marginal effects of immutable features. Third, for generative models, law should optimize developer liability for data mining and content generation, balancing the potential social harms arising from infringing content against the negative impact of excessive filtering, and identify cases where a model's non-human identity should be disclosed. Lastly, for cognitive models, data protection law should be adapted to effectively address privacy, surveillance, and security problems and to facilitate governance built on public-private partnerships.