The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameter models, large language models are increasingly expensive to design and train accurately. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM--the BigScience Large Open-science Open-access Multilingual language model--our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience.