On October 14th, 2020, researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities convened to discuss open research questions surrounding GPT-3, the largest publicly disclosed dense language model at the time. The meeting took place under the Chatham House Rule. Discussants came from a variety of research backgrounds, including computer science, linguistics, philosophy, political science, communications, and cyber policy. Broadly, the discussion centered on two main questions: 1) What are the technical capabilities and limitations of large language models? 2) What are the societal effects of widespread use of large language models? Here, we provide a detailed summary of the discussion organized around these two themes.