Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve from iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with 7 datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. Instead of collecting millions of examples to attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets, created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted in the time-span of a day.
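To make the "agile" recipe concrete, below is a minimal sketch of prompt tuning for a small safety-classification dataset: the backbone model is frozen and only a short soft prompt plus a classification head are trained. The sketch uses a small Hugging Face encoder ("bert-base-uncased") purely as a stand-in for a larger model like PaLM 62B; the model name, prompt length, and head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of soft prompt tuning for text classification (assumptions noted above).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    def __init__(self, backbone_name="bert-base-uncased", prompt_len=20, num_labels=2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        for p in self.backbone.parameters():  # freeze every backbone weight
            p.requires_grad = False
        hidden = self.backbone.config.hidden_size
        # The only trainable pieces: a soft prompt and a small classification head.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        embeds = self.backbone.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, embeds], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0), device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.backbone(inputs_embeds=inputs_embeds, attention_mask=mask)
        pooled = out.last_hidden_state[:, 0]  # first-token representation as a summary
        return self.head(pooled)

# Usage: with this setup, the labeled policy dataset can be as small as ~80 examples.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SoftPromptClassifier()
batch = tokenizer(["example comment to classify"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

Because only the prompt and head receive gradients, a full tuning run over such a small dataset completes quickly, which is what makes the day-scale iteration on policy-specific classifiers described above plausible.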