Artificial Intelligence (AI) is at the forefront of modern technology, and its effects are felt across many areas of society. To mitigate algorithmic disparities, principles of fairness, accountability, transparency, and ethics (FATE) are being applied to AI systems. However, the current discourse on these issues is largely dominated by more economically developed countries (MEDCs), leaving out local knowledge, cultural pluralism, and global fairness. This study addresses this gap by examining FATE-related desiderata, particularly transparency and ethics, in regions of the Global South that are underserved by AI. To this end, a user study (n=43) and a participatory session (n=30) were conducted. The results show that AI models can encode bias and amplify stereotypes. To promote inclusivity, a community-led strategy is proposed for collecting and curating representative data for responsible AI design, enabling affected communities and individuals to monitor the growing use of AI-powered systems. In addition, recommendations grounded in public input are provided to ensure that AI adheres to social values and context-specific FATE requirements.