Artificial intelligence (AI) systems attempt to imitate human behavior. How well they imitate it is often used to assess their utility and to attribute human-like (or artificial) intelligence to them. However, most work on AI refers to and relies on notions of human intelligence without accounting for the fact that human behavior is inherently shaped by the cultural contexts people are embedded in, the values and beliefs they hold, and the social practices they follow. Additionally, since AI technologies are mostly conceived and developed in just a handful of countries, they embed the cultural values and practices of those countries. Similarly, the data used to train the models fails to equitably represent global cultural diversity. Problems therefore arise when these technologies interact with globally diverse societies and cultures whose values and interpretive practices differ. In this position paper, we describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies, and reflect on the possibility of addressing these incongruencies and potential strategies for doing so.