Static subword tokenization algorithms have been an essential component of recent works on language modeling. However, their static nature results in important flaws that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pre-trained model on several English datasets from different domains as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
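To make the idea of a differentiable, end-to-end trained tokenizer concrete, below is a minimal sketch of a soft segmentation module: byte embeddings are pooled into block embeddings through learned boundary probabilities, so gradients from the language model can flow back into the segmentation. This is an illustrative assumption-laden sketch, not the authors' exact architecture; the class name `SoftBlockPooler`, the triangular assignment kernel, and all dimensions (`d_model`, `max_blocks`) are hypothetical choices for clarity.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a
# differentiable byte-to-block segmentation: frontier probabilities are
# predicted per byte, and bytes are softly pooled into block embeddings.
import torch
import torch.nn as nn


class SoftBlockPooler(nn.Module):
    def __init__(self, d_model: int = 64, max_blocks: int = 32):
        super().__init__()
        self.byte_emb = nn.Embedding(256, d_model)   # one embedding per byte value
        self.boundary = nn.Linear(d_model, 1)        # predicts P(block frontier) per byte
        self.max_blocks = max_blocks

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (batch, seq_len) of raw byte values in [0, 255]
        h = self.byte_emb(byte_ids)                                # (B, L, D)
        p_frontier = torch.sigmoid(self.boundary(h)).squeeze(-1)  # (B, L)
        # Expected block index of each byte = cumulative sum of frontier probabilities.
        block_pos = torch.cumsum(p_frontier, dim=-1)               # (B, L)
        # Soft assignment of each byte to each block slot (triangular kernel).
        slots = torch.arange(self.max_blocks, device=byte_ids.device)  # (K,)
        dist = (block_pos.unsqueeze(-1) - slots).abs()             # (B, L, K)
        weights = torch.clamp(1.0 - dist, min=0.0)                 # (B, L, K)
        # Weighted average of byte embeddings per block slot.
        num = torch.einsum("blk,bld->bkd", weights, h)
        den = weights.sum(dim=1).unsqueeze(-1).clamp(min=1e-6)
        return num / den                                           # (B, K, D) block embeddings


# Usage: pooled block embeddings would feed a downstream language model,
# shortening the sequence from L bytes to at most K blocks.
pooler = SoftBlockPooler()
blocks = pooler(torch.randint(0, 256, (2, 128)))
print(blocks.shape)  # torch.Size([2, 32, 64])
```

Because the assignment weights are differentiable, the segmentation adapts during pre-training instead of being fixed in advance, which is the property the abstract contrasts with static subword tokenizers.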