We present a smoothly broken power law functional form (which we refer to as a Broken Neural Scaling Law (BNSL)) that accurately models and extrapolates the scaling behaviors of deep neural networks, i.e., how the evaluation metric of interest varies as the amount of compute used for training, the number of model parameters, the training dataset size, the model input size, the number of training steps, or upstream performance varies. It does so for various architectures and for each of a large and diverse set of upstream and downstream tasks, in zero-shot, prompted, and fine-tuned settings. This set includes large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, molecules, computer programming/coding, math word problems, "emergent" "phase transitions / changes", arithmetic, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent). Compared to other functional forms for neural scaling behavior, this functional form yields considerably more accurate extrapolations of scaling behavior on this set. Moreover, it accurately models and extrapolates scaling behavior that other functional forms are incapable of expressing, such as the non-monotonic transitions present in the scaling behavior of phenomena such as double descent and the delayed, sharp inflection points present in the scaling behavior of tasks such as arithmetic. Lastly, we use this functional form to glean insights about the limit of the predictability of scaling behavior. Code is available at https://github.com/ethancaballero/broken_neural_scaling_laws
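The abstract does not reproduce the functional form itself. As a minimal illustrative sketch (the function name, parameter names, and parameter values below are ours, not necessarily the authors' exact notation), a smoothly broken power law of the general shape described here can be written as a base power law multiplied by one smooth "break" factor per transition in the log-log slope:

```python
import numpy as np

def bnsl(x, a, b, c0, breaks=()):
    """Smoothly broken power law:
        a + b * x^(-c0) * prod_i (1 + (x / d_i)^(1/f_i))^(-c_i * f_i)

    `breaks` is a sequence of (c_i, d_i, f_i) triples. Each triple adds one
    smooth break: the local log-log slope shifts by an extra -c_i, with d_i
    setting roughly where on the x-axis the break occurs and f_i controlling
    how sharp the transition is. A negative c_i lets the curve turn upward,
    which is how non-monotonic behavior (e.g. double descent) can be expressed.
    """
    x = np.asarray(x, dtype=float)
    y = b * x ** (-c0)
    for c_i, d_i, f_i in breaks:
        y = y * (1.0 + (x / d_i) ** (1.0 / f_i)) ** (-c_i * f_i)
    return a + y

# Illustrative single-break curve: log-log slope is about -0.5 well before
# the break at d = 1e3 and about -1.0 well after it.
xs = np.array([1e1, 1e3, 1e5])
ys = bnsl(xs, a=0.0, b=1.0, c0=0.5, breaks=[(0.5, 1e3, 1.0)])
```

In practice, a form like this would be fit to observed (scale, metric) pairs, e.g. with a nonlinear least-squares routine such as `scipy.optimize.curve_fit`, and then evaluated at larger scales to extrapolate.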