Generative language models define distributions over sequences of tokens that can represent essentially any combination of data modalities (e.g., any permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens for language or code, and so on). To better understand the scaling properties of such mixed-modal models, we conducted over 250 experiments using seven different modalities and model sizes ranging from 8 million to 30 billion parameters, trained on 5-100 billion tokens. We report new mixed-modal scaling laws that unify the contributions of individual modalities and the interactions between them. Specifically, we explicitly model the optimal synergy and competition due to data and model size as an additive term to previous uni-modal scaling laws. We also find four empirical phenomena observed during training, such as emergent coordinate-ascent-style training that naturally alternates between modalities, guidelines for selecting critical hyper-parameters, and connections between mixed-modal competition and training stability. Finally, we test our scaling law by training a 30B speech-text model, which significantly outperforms the corresponding unimodal models. Overall, our research provides valuable insights into the design and training of mixed-modal generative models, an important new class of unified models with unique distributional properties.
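To make the "additive term" idea concrete, the following is a minimal sketch of what such a mixed-modal scaling law could look like: a Chinchilla-style uni-modal law in model size N and token count D, plus an assumed additive interaction term whose sign encodes synergy versus competition. The functional form, parameter names, and all constants here are hypothetical illustrations, not the fitted laws reported in the paper.

```python
def unimodal_loss(N, D, E, A, alpha, B, beta):
    """Chinchilla-style uni-modal scaling law: irreducible loss E plus
    power-law terms in model parameters N and training tokens D."""
    return E + A / N**alpha + B / D**beta

def mixed_modal_loss(N, D, params_i, params_j, C_ij, gamma_ij, delta_ij):
    """Illustrative mixed-modal law: average of two uni-modal laws plus an
    additive interaction term. A negative C_ij models synergy between the
    modalities, a positive C_ij models competition; the interaction decays
    with both N and D. All values are hypothetical, not the paper's fits."""
    L_i = unimodal_loss(N, D, *params_i)
    L_j = unimodal_loss(N, D, *params_j)
    interaction = C_ij / (N**gamma_ij * D**delta_ij)
    return 0.5 * (L_i + L_j) + interaction

# Example with made-up constants: (E, A, alpha, B, beta) per modality.
params_text = (1.7, 400.0, 0.34, 410.0, 0.28)
params_speech = (1.5, 350.0, 0.30, 380.0, 0.27)
print(mixed_modal_loss(30e9, 100e9, params_text, params_speech,
                       C_ij=-50.0, gamma_ij=0.2, delta_ij=0.1))
```

Under this kind of parameterization, the interaction term vanishes as N and D grow if it decays faster than the uni-modal terms, which is one way competition or synergy between modalities could change with scale.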