We present two novel unsupervised methods for eliminating toxicity in text. Our first method combines two recent ideas: (1) guidance of the generation process with small style-conditional language models and (2) use of paraphrasing models to perform style transfer. We use a well-performing paraphraser guided by style-trained language models to preserve the content of the text while removing its toxicity. Our second method uses BERT to replace toxic words with their non-offensive synonyms. We make the method more flexible by letting BERT replace mask tokens with a variable number of words. Finally, we present the first large-scale comparative study of style transfer models on the task of toxicity removal. We compare our models with a number of style transfer methods. The models are evaluated in a reference-free way using a combination of unsupervised style transfer metrics. Both of the methods we suggest yield new SOTA results.
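As a rough illustration of the word-replacement idea behind the second method, the sketch below masks words from a toy toxic lexicon and asks a BERT fill-mask model for a non-toxic substitute. The TOXIC_WORDS list, the detoxify helper, and the single-token, single-pass replacement are simplifying assumptions made here for illustration; the actual method scores word toxicity and allows a variable number of replacement words.

# Illustrative sketch only: replace words from a toy toxic lexicon with
# BERT's highest-scoring substitute that is not itself in the lexicon.
# This is a simplified toy, not the paper's implementation.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

TOXIC_WORDS = {"stupid", "idiot", "dumb"}  # hypothetical toy lexicon


def detoxify(sentence: str) -> str:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,!?") in TOXIC_WORDS:
            masked = tokens.copy()
            masked[i] = fill_mask.tokenizer.mask_token
            # Take the best-scoring candidate that is not in the toxic lexicon.
            for cand in fill_mask(" ".join(masked)):
                if cand["token_str"].lower() not in TOXIC_WORDS:
                    tokens[i] = cand["token_str"]
                    break
    return " ".join(tokens)


print(detoxify("that was a stupid thing to say"))
# Output depends on the model's scores, e.g. "that was a bad thing to say".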