One of the defining phenomena of this age is the widespread deployment of systems powered by artificial intelligence (AI). With AI taking center stage, many sections of society are affected directly or indirectly by algorithmic decisions. Algorithmic decisions carry both economic and personal implications, which have brought about the issues of fairness, accountability, transparency and ethics (FATE) in AI, geared towards addressing algorithmic disparities. Ethical AI is concerned with incorporating moral behaviour so as to avoid encoding bias into AI's decisions. However, the present discourse on such critical issues is being shaped by the more economically developed countries (MEDC), which raises concerns about the neglect of local knowledge, cultural pluralism and global fairness. This study builds upon existing research on responsible AI, with a focus on areas of the Global South considered to be under-served vis-a-vis AI. Our goal is two-fold: (1) to assess FATE-related issues and the effectiveness of transparency methods, and (2) to proffer useful insights and stimulate action towards bridging the accessibility and inclusivity gap in AI. Using ads data from online social networks, we designed a user study (n=43) to achieve these goals. Findings from the study include the following: explanations of decisions reached by the AI systems tend to be vague and not very informative. To bridge the accessibility and inclusivity gap, there is a need to engage with the affected communities on how best to integrate fairness, accountability, transparency and ethics in AI. This will help empower affected communities and individuals to effectively probe and police the growing application of AI-powered systems.