This applied research endeavor explores the ways in which artificial intelligence can either exacerbate or reduce systemic racial injustice. Through the thematic stages of identifying, analyzing, and debating a systemic issue, it investigates the merits and drawbacks of using algorithms to automate human decision making in racially sensitive environments. Analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications supports the assertion that natural language processing based AI, such as risk assessment tools, produces racially disparate outcomes. It concludes that stronger legislative policies are needed to regulate and restrict how government institutions and corporations use algorithms, to manage privacy and security risks, and to impose auditing requirements, in order to break from the racially unjust outcomes and practices of the past.