"AI as a Service" (AIaaS) is a rapidly growing market, offering various plug-and-play AI services and tools. AIaaS enables its customers (users) - who may lack the expertise, data, and/or resources to develop their own systems - to easily build and integrate AI capabilities into their applications. Yet, it is known that AI systems can encapsulate biases and inequalities that can have societal impact. This paper argues that the context-sensitive nature of fairness is often incompatible with AIaaS' 'one-size-fits-all' approach, leading to issues and tensions. Specifically, we review and systematise the AIaaS space by proposing a taxonomy of AI services based on the levels of autonomy afforded to the user. We then critically examine the different categories of AIaaS, outlining how these services can lead to biases or be otherwise harmful in the context of end-user applications. In doing so, we seek to draw research attention to the challenges of this emerging area.