Libraries are increasingly relying on computational methods, including methods from Artificial Intelligence (AI). This growing usage raises concerns about the risks of AI that are currently broadly discussed in the scientific literature, the media, and law-making. In this article we investigate the risks surrounding bias and unfairness in the use of AI for classification and automated text analysis within the context of library applications. We describe examples showing that the library community has been aware of such risks for a long time, and that it has developed and deployed countermeasures. We take a closer look at the notion of '(un)fairness' in relation to the notion of 'diversity', and we investigate a formalisation of diversity that models both inclusion and distribution. We argue that many of the unfairness problems of automated content analysis can also be regarded through the lens of diversity and of the countermeasures taken to enhance diversity.
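To make the idea of a diversity measure covering both inclusion and distribution concrete, the following is a minimal, hypothetical sketch (not the formalisation investigated in the article): inclusion is modelled as the fraction of known categories represented at all in a set of assigned subject labels, and distribution as the normalised Shannon entropy of the label frequencies. The function name, the category lists, and the choice of entropy are all illustrative assumptions.

```python
import math
from collections import Counter


def diversity(labels, categories):
    """Toy diversity score over a list of assigned subject labels.

    Returns (inclusion, distribution), both in [0, 1]:
      - inclusion: fraction of the known categories that appear at least once;
      - distribution: Shannon entropy of the label frequencies, normalised by
        the maximum possible entropy log(|categories|), so 1.0 means a
        perfectly even spread and 0.0 means all items fall in one category.
    """
    counts = Counter(labels)
    inclusion = sum(1 for c in categories if counts[c] > 0) / len(categories)
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(categories))
    distribution = entropy / max_entropy if max_entropy > 0 else 1.0
    return inclusion, distribution


# Illustrative collection: three of four categories present, skewed counts.
inc, dist = diversity(
    ["history", "history", "poetry", "science"],
    ["history", "poetry", "science", "art"],
)
```

In this toy example a collection can score high on inclusion while scoring low on distribution (one category dominates), which mirrors the abstract's point that both aspects need to be modelled separately.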