As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an operationalization of truth, where distinct, often conflicting claims are smoothly synthesized and confidently presented as truth-statements. We argue that these same logics and inconsistencies play out in InstructGPT's successor, ChatGPT, reiterating truth as a non-trivial problem. We suggest that enriching sociality and thickening "reality" are two promising vectors for enhancing the truth-evaluating capacities of future language models. We conclude, however, by stepping back to consider AI truth-telling as a social practice: what kind of "truth" do we as listeners desire?