With adversarial or otherwise normal prompts, existing large language models (LLMs) can be pushed to generate toxic discourses. One way to reduce the risk of LLMs generating undesired discourses is to alter the training of the LLM. This can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited because they dismiss future tokens and the overall meaning of the complete discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic. That is, at each decoding step, we advise against token selections in proportion to how likely it is that a text completed from that point will be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. Importantly, our method does not require access to the internal representations of the LLM, but only the token probability distribution at each decoding step. This is crucial because many LLMs today are hosted on servers and are only accessible through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques in terms of both overall language quality and detoxification performance.
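To make the decoding-time intervention concrete, the sketch below shows one way such a rectification step could look: the base LLM supplies only its next-token probability distribution, a separate and much smaller model supplies a per-token estimate of how likely a discourse continued through that token is to end up toxic, and risky tokens are down-weighted or blocked before sampling. This is a minimal illustrative sketch under assumed names (`rectify_next_token_distribution`, `toxicity_risk`, and the fixed `threshold` are hypothetical); the actual rule in the paper is derived from its dead-end analysis rather than this simple proportional penalty.

```python
import numpy as np

def rectify_next_token_distribution(base_probs, toxicity_risk, threshold=0.5):
    """Hypothetical sketch of decoding-time rectification.

    base_probs:    next-token distribution from the (possibly API-hosted) LLM,
                   shape (vocab_size,).
    toxicity_risk: per-token estimate, from a separate smaller model, of the
                   probability that a discourse continued through that token
                   is ultimately toxic, shape (vocab_size,).
    threshold:     risk level above which a token is suppressed entirely
                   (illustrative choice, not the paper's exact rule).
    """
    # Penalize each candidate token in proportion to its estimated risk,
    # and hard-block tokens whose risk exceeds the threshold.
    adjusted = base_probs * (1.0 - toxicity_risk)
    adjusted[toxicity_risk > threshold] = 0.0

    # Renormalize so the result is again a probability distribution.
    total = adjusted.sum()
    if total == 0.0:  # every token was blocked; fall back to the base LLM
        return base_probs
    return adjusted / total

# Toy usage with a 5-token vocabulary.
base_probs = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
toxicity_risk = np.array([0.05, 0.90, 0.20, 0.60, 0.10])
print(rectify_next_token_distribution(base_probs, toxicity_risk))
```

Because the adjustment consumes only the token probability distribution and the shared vocabulary, the same rectification model could in principle be reused across different base LLMs without touching their internals.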