Modern language models mostly take sub-words as input, a design that balances the trade-off between vocabulary size, number of parameters, and performance. However, sub-word tokenization still has drawbacks, such as a lack of robustness to noise and difficulty generalizing to new languages. Moreover, the current trend of scaling up models shows that larger models require larger embeddings, which makes parallelization harder. Previous work on image classification shows that splitting the raw input into a sequence of chunks is a strong, model-agnostic inductive bias. Based on this observation, we rethink existing character-aware methods that take character-level inputs but perform word-level sequence modeling and prediction. We overhaul this approach by introducing a cross-attention network that builds word-level representations directly from bytes, and sub-word-level prediction based on word-level hidden states that avoids the time and space cost of word-level prediction. With these two improvements combined, we obtain a token-free model with slim input embeddings for downstream tasks. We name our method Byte2Word and evaluate it on language modeling and text classification. Experiments show that Byte2Word is on par with the strong sub-word baseline BERT while using only 10\% of the embedding size. We further test our method on synthetic noise and cross-lingual transfer and find it competitive with baseline methods in both settings.
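The abstract describes building word-level representations directly from byte inputs via cross-attention. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class name, the single learned query per word, and all hyper-parameters are illustrative assumptions and do not reproduce the paper's actual Byte2Word architecture.

```python
# Hypothetical sketch: pooling the bytes of each word into one word-level
# vector with cross-attention (names and sizes are illustrative only).
import torch
import torch.nn as nn


class ByteToWordCrossAttention(nn.Module):
    """Builds one word-level vector from the byte embeddings of a word
    using cross-attention against a learned per-word query."""

    def __init__(self, byte_vocab_size: int = 256, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Byte embedding table is tiny (256 entries) compared to a sub-word vocabulary.
        self.byte_embed = nn.Embedding(byte_vocab_size, d_model)
        # One learned query vector, shared across all words (an assumption of this sketch).
        self.word_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, byte_ids: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # byte_ids: (num_words, max_bytes_per_word) byte ids for each word
        # pad_mask: (num_words, max_bytes_per_word) True where the position is padding
        keys = self.byte_embed(byte_ids)                          # (W, B, d)
        query = self.word_query.expand(byte_ids.size(0), -1, -1)  # (W, 1, d)
        word_vec, _ = self.attn(query, keys, keys, key_padding_mask=pad_mask)
        return word_vec.squeeze(1)                                # (W, d) word-level representations


# Toy usage: two "words" padded to 6 bytes each; byte id 0 is treated as padding here.
byte_ids = torch.tensor([[72, 101, 108, 108, 111, 0],    # "Hello" + pad
                         [119, 111, 114, 108, 100, 33]])  # "world!"
pad_mask = byte_ids.eq(0)
words = ByteToWordCrossAttention()(byte_ids, pad_mask)
print(words.shape)  # torch.Size([2, 768])
```

The resulting word-level vectors would then feed a standard Transformer encoder, so the only per-token embedding cost is the 256-entry byte table rather than a full sub-word vocabulary.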