Large language models (LLMs) can acquire strong code-generation capabilities through few-shot learning. In contrast, smaller models still require supervised fine-tuning to achieve good performance. Such fine-tuning demands a large number of task-specific NL-code pairs, which are expensive to obtain. In this paper, we attempt to transfer the code generation ability of an LLM to a smaller model with the aid of weakly-supervised data. More specifically, we propose explicit knowledge transfer (EKT), which uses the few-shot capabilities of a teacher LLM to create NL-code pairs that we then filter for correctness and use to fine-tune the student. We evaluate EKT on the task of generating code solutions to math word problems from the GSM8k dataset. We find that EKT not only yields better performance than training with expert iteration, but also outperforms knowledge distillation, another form of knowledge transfer. A GPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves 12.4% pass@100 on GSM8k, while the same student and teacher trained with knowledge distillation yield only 3.7% pass@100. We also show that it is possible for a student model to outperform the teacher using EKT.