The advent of large language models (LLMs), such as GPT-4, has enabled significant advances in code generation across many domains. However, these models face distinct challenges when generating IEC 61131-3 Structured Text (ST) code, owing to the scarcity of ST examples in public training datasets and the complexity of ST syntax. This paper proposes a novel approach to training LLMs that emphasizes improving the quality of the training data through an online process combining compiler feedback with evaluation by a secondary LLM. In this framework, the primary LLM generates new training samples, which are then checked by a compiler for syntactic correctness and by a specialized LLM that excels at assessing semantic accuracy, although it is not itself optimized for code generation. Through iterative refinement of the training data, this approach yields marked improvements in the trained LLM, producing higher compilation success rates and better semantic precision. As a result, the framework is well suited to industrial automation applications and outperforms state-of-the-art models.
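To make the described workflow concrete, the following is a minimal sketch of one filtering round of such a generate-compile-judge loop. All names here (`generate`, `st_compiles`, `judge_score`, the `iec2c` invocation from the MATIEC compiler, and the 0.8 acceptance threshold) are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the iterative data-refinement loop: generate ST
# samples, then keep only those that pass both a compiler syntax check and
# a judge-LLM semantic threshold. Interface names are assumptions.

import subprocess
import tempfile
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    prompt: str
    code: str  # candidate IEC 61131-3 Structured Text program


def st_compiles(code: str) -> bool:
    """Syntax check by invoking an external ST compiler (e.g. MATIEC's iec2c)."""
    with tempfile.NamedTemporaryFile("w", suffix=".st", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["iec2c", path], capture_output=True)
    return result.returncode == 0


def judge_score(prompt: str, code: str) -> float:
    """Semantic rating in [0, 1] from a secondary 'judge' LLM (stubbed here)."""
    return 1.0  # placeholder: a real system would query the judge model


def refine_dataset(
    prompts: List[str],
    generate: Callable[[str], str],  # the primary, trainable LLM
    rounds: int = 3,
    threshold: float = 0.8,
) -> List[Sample]:
    dataset: List[Sample] = []
    for _ in range(rounds):
        for prompt in prompts:
            code = generate(prompt)
            if not st_compiles(code):
                continue  # reject syntactically invalid samples
            if judge_score(prompt, code) < threshold:
                continue  # reject semantically weak samples
            dataset.append(Sample(prompt, code))
        # A real pipeline would fine-tune `generate` on `dataset` here,
        # then regenerate higher-quality samples in the next round.
    return dataset
```

The key design point the abstract highlights is the division of labor: the compiler provides a cheap, exact syntactic filter, while the judge LLM covers semantic quality that a compiler cannot assess.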