The impact of Artificial Intelligence does not depend only on fundamental research and technological development, but to a large extent on how these systems are introduced into society and used in everyday situations. Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective. A rational approach to AI, in which computational algorithms drive decision making independently of human intervention, insights and emotions, has been shown to result in bias and exclusion, laying bare societal vulnerabilities and insecurities. A relational approach, one that focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI. A relational approach to AI recognises that objective and rational reasoning does not always result in the 'right' way to proceed, because what is 'right' depends on the dynamics of the situation in which the decision is taken, and that rather than solving ethical problems, the focus of the design and use of AI must be on asking the ethical question. In this position paper, I start with a general discussion of current conceptualisations of AI, followed by an overview of existing approaches to the governance and responsible development and use of AI. I then reflect on what the bases of a social paradigm for AI should be and how this can be grounded in relational, feminist and non-Western philosophies, in particular the Ubuntu philosophy.