Artificial Intelligence (AI) is becoming the cornerstone of many systems used in our daily lives, such as autonomous vehicles, healthcare systems, and unmanned aircraft systems. Machine Learning is a field of AI that enables systems to learn from data and, based on models, make decisions on new data to achieve a given goal. The stochastic nature of AI models makes verification and validation tasks challenging. Moreover, there are intrinsic biases in AI models, such as reproducibility bias, selection bias (e.g., race, gender, color), and reporting bias (i.e., results that do not reflect reality). Increasingly, particular attention is also being paid to the ethical, legal, and societal impacts of AI. AI systems are difficult to audit and certify because of their black-box nature. They are also vulnerable to threats: AI systems can misbehave when fed untrusted data, making them insecure and unsafe. Governments and national and international organizations have proposed several principles to overcome these challenges, but their application in practice is limited, and the principles admit different interpretations that can bias implementations. In this paper, we examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy, and we identify actions that need to be undertaken to ensure that AI systems are trustworthy. To achieve this goal, we first review existing approaches proposed for ensuring the trustworthiness of AI systems, in order to identify potential conceptual gaps in the understanding of what trustworthy AI is. Then, we propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.