Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions; the standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, {\sc Director}, that consists of a unified generator-classifier with both a language modeling head and a classification head for each output token. Training is conducted jointly on standard language modeling data and on data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has training and decoding speed competitive with standard language models while yielding superior results, alleviating the known issues above without sacrificing generation quality. It also outperforms existing model-guiding approaches in terms of both accuracy and efficiency.
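To make the unified generator-classifier concrete, below is a minimal PyTorch sketch of the two-head design described above. The names ({\tt DirectorHeads}, {\tt guided\_next\_token\_logprobs}), the sigmoid squashing of the classifier logits, and the mixing weight {\tt gamma} are illustrative assumptions, not the paper's exact formulation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectorHeads(nn.Module):
    # Two heads over a shared decoder's hidden states: a language
    # modeling head and a per-token classification head, both sized
    # to the vocabulary (illustrative sketch, names are assumptions).
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.lm_head = nn.Linear(hidden_dim, vocab_size)   # next-token logits
        self.cls_head = nn.Linear(hidden_dim, vocab_size)  # desirable/undesirable logits

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, hidden_dim) from the shared decoder
        return self.lm_head(hidden), self.cls_head(hidden)

def guided_next_token_logprobs(lm_logits, cls_logits, gamma=1.0):
    # Decoding-time mixture (assumed form): combine the language model's
    # next-token distribution with each candidate token's probability of
    # keeping the sequence desirable, weighted by gamma.
    lm_logprobs = F.log_softmax(lm_logits, dim=-1)
    desirable_logprobs = F.logsigmoid(cls_logits)  # per-token P(desirable)
    return lm_logprobs + gamma * desirable_logprobs
\end{verbatim}

At each decoding step, the last position's hidden state would be fed through both heads and the next token sampled (or greedily picked) from the mixed distribution; setting {\tt gamma} to zero recovers the standard language model.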