Assessments of algorithmic bias in large language models (LLMs) are generally geared toward uncovering systemic discrimination based on protected characteristics such as sex and ethnicity. However, the more than 180 documented cognitive biases that pervade human reasoning and decision making are routinely ignored in discussions of the ethical complexities of AI. We demonstrate the presence of these cognitive biases in LLMs and discuss the implications of deploying biased reasoning under the guise of expertise. The rapid adoption of LLMs has brought about a technological shift in which these biased outputs pervade more sectors than ever before. We call for stronger education, risk management, and continued research as widespread adoption of this technology increases.