Large language models (LLMs) such as DeepSeek-R1 have achieved remarkable performance across diverse reasoning tasks. To uncover the principles that govern their behaviour, we introduce the Electronic Circuit Principles (ECP), which map inference-time learning (ITL) onto a semantic electromotive force and inference-time reasoning (ITR) onto a resistive network governed by Ohm's and Faraday's laws. This circuit-based modelling yields closed-form predictions of task performance and reveals how modular prompt components interact to shape accuracy. We validated ECP on 70,000 samples spanning 350 reasoning tasks and 9 advanced LLMs, observing an approximately 60% improvement in Pearson correlation relative to the conventional inference-time scaling law. Moreover, ECP explains the efficacy of 15 established prompting strategies and directs the development of new modular interventions that exceed the median score of the top 80% of participants in both the International Olympiad in Informatics and the International Mathematical Olympiad. By grounding LLM reasoning in electronic-circuit principles, ECP provides a rigorous framework for predicting performance and optimising modular components.
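The circuit analogy above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's actual formulation: the function name, parameters, and values below are assumptions introduced purely to show how Ohm's law (I = V / R) could yield a closed-form performance prediction from a semantic EMF and per-module resistances.

```python
def predicted_performance(emf: float, module_resistances: list[float]) -> float:
    """Hypothetical sketch of the ECP analogy: treat inference-time
    learning as an electromotive force (voltage source) and each prompt
    module as a resistor in series; the resulting 'current' I = V / R_total
    serves as a proxy for predicted task performance."""
    r_total = sum(module_resistances)  # series resistances add
    if r_total <= 0:
        raise ValueError("total resistance must be positive")
    return emf / r_total


# Under this toy model, a stronger semantic EMF or lower-resistance
# prompt modules both raise the predicted performance.
base = predicted_performance(emf=1.0, module_resistances=[0.5, 0.5])      # 1.0
better = predicted_performance(emf=1.0, module_resistances=[0.25, 0.25])  # 2.0
```

The series-resistance form captures the abstract's claim that modular prompt components interact to shape accuracy: each module contributes additively to the total "resistance" the reasoning current must overcome.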