Past work on interpretability in natural language processing has focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools. In this work, we introduce Inseq, a Python library that democratizes access to interpretability analyses of sequence generation models. Inseq enables intuitive and optimized extraction of models' internal information and feature importance scores for popular decoder-only and encoder-decoder Transformer architectures. We showcase its potential by using it to highlight gender biases in machine translation models and to locate factual knowledge inside GPT-2. Thanks to its extensible interface supporting cutting-edge techniques such as contrastive feature attribution, Inseq can drive future advances in explainable natural language generation, centralizing good practices and enabling fair and reproducible model evaluations.