Large language models (LLMs) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy. Previous publicly available transformer models, released eighteen months earlier and roughly 1000 times smaller, failed to perform basic arithmetic. The statistical analysis of four complex datasets described here combines arithmetic manipulations that cannot be memorized or encoded by simple rules. The work examines whether next-token prediction extends beyond sentence completion into actual numerical understanding. For example, the work highlights cases of descriptive statistics on in-memory datasets that the LLM either loads from memory or generates randomly using Python libraries. The resulting exploratory data analysis showcases the model's ability to group and pivot categorical sums, infer feature importance, derive correlations, and predict unseen test cases using linear regression. To extend the model's testable range, the research deletes and appends random rows so that recall alone cannot explain emergent numeracy.
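The sketch below illustrates, in conventional pandas and scikit-learn code, the kinds of tasks the abstract describes being posed to the LLM. It is a minimal illustrative example, not the authors' actual prompts or datasets; the column names (`region`, `units`, `price`, `revenue`) and the test case are hypothetical.

```python
# Minimal sketch of the task types described above (hypothetical data/columns).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)

# A randomly generated in-memory dataset, as the LLM might produce on request.
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], size=100),
    "units": rng.integers(1, 50, size=100),
    "price": rng.uniform(5.0, 20.0, size=100),
})
df["revenue"] = df["units"] * df["price"]

# Descriptive statistics, then grouped and pivoted categorical sums.
print(df.describe())
print(df.groupby("region")["revenue"].sum())
print(df.pivot_table(values="revenue", index="region", aggfunc="sum"))

# Correlations between numeric features.
print(df[["units", "price", "revenue"]].corr())

# Linear regression to predict an unseen test case; the fitted
# coefficients serve as a crude proxy for feature importance.
model = LinearRegression().fit(df[["units", "price"]], df["revenue"])
print(dict(zip(["units", "price"], model.coef_)))
test_case = pd.DataFrame({"units": [25], "price": [12.5]})  # hypothetical unseen row
print(model.predict(test_case))

# Delete and append random rows so recall alone cannot explain the answers.
df = df.drop(df.sample(n=5, random_state=0).index)
new_rows = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], size=5),
    "units": rng.integers(1, 50, size=5),
    "price": rng.uniform(5.0, 20.0, size=5),
})
new_rows["revenue"] = new_rows["units"] * new_rows["price"]
df = pd.concat([df, new_rows], ignore_index=True)
```

Because the mutated dataset never appeared in training data, correct answers after the delete-and-append step are evidence of computation rather than recall.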