We study the compute-optimal trade-off between model and training data set sizes for large neural networks. Our result suggests a linear relation, similar to that supported by the empirical analysis of Chinchilla. While that work studies transformer-based large language models trained on the MassiveText corpus (Gopher), as a starting point for the development of a mathematical theory, we focus on a simpler learning model and data-generating process, each based on a neural network with a sigmoidal output unit and a single hidden layer of ReLU activation units. We establish an upper bound on the minimal information-theoretically achievable expected error as a function of model and data set sizes. We then derive allocations of computation that minimize this bound. We present empirical results which suggest that this approximation correctly identifies an asymptotic linear compute-optimal scaling. This approximation can also generate new insights. Among other things, it suggests that, as the input space dimension or latent space complexity grows, as might be the case for example if a longer history of tokens is taken as input to a language model, a larger fraction of the compute budget should be allocated to growing the learning model rather than the training data set.
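To make the flavor of such an allocation problem concrete, the following sketch minimizes a hypothetical Chinchilla-style error bound of the form L(N, D) = A/N^α + B/D^β under the common compute heuristic C ≈ 6·N·D. The constants A, B, α, β and the bound itself are illustrative assumptions, not the bound derived in this paper; the point is only that minimizing such a bound yields a linear relation between log N* and log C.

```python
import numpy as np

# Hypothetical power-law bound (an illustrative assumption, NOT this paper's bound):
# L(N, D) = A / N**alpha + B / D**beta, with compute budget C ~ 6 * N * D.
A, B, alpha, beta = 400.0, 400.0, 0.34, 0.28

def optimal_allocation(C, n_grid=4000):
    """Grid-search the model size N minimizing the bound at compute budget C."""
    N = np.logspace(3, 12, n_grid)   # candidate model sizes (parameters)
    D = C / (6.0 * N)                # training tokens implied by the budget
    L = A / N**alpha + B / D**beta   # bound evaluated along the budget constraint
    i = np.argmin(L)
    return N[i], D[i]

# As C grows, log N* grows linearly in log C with slope beta / (alpha + beta),
# i.e. a linear compute-optimal scaling in log-log coordinates.
for C in [1e18, 1e20, 1e22]:
    N_star, D_star = optimal_allocation(C)
    print(f"C={C:.0e}  N*={N_star:.3e}  D*={D_star:.3e}")
```

Setting the derivative of L along the constraint to zero gives N* ∝ C^{β/(α+β)} and D* ∝ C^{α/(α+β)} in closed form, which the grid search recovers numerically.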