Log statements have become an integral part of modern software systems. Prior research has focused on supporting decisions about placing log statements, such as where and what to log, while the automated generation or completion of log statements has received little attention. With the increasing use of Large Language Models (LLMs) for code-related tasks such as code completion and generation, automated methods for generating or completing log statements have gained momentum. Enterprises often prefer fine-tuning open-source LLMs such as the Llama series over using commercial ones such as the GPT series, for reasons including privacy, security, openness, and performance. Fine-tuning LLMs requires task-specific training data and custom-designed processing algorithms, which have not yet been thoroughly explored for the log statement generation task. This paper fills this gap by contributing such a fine-tuning method, LoGFiLM, and an exemplar model obtained by applying the method to fine-tune Llama-3-8B. Experiments on our own curated dataset and a public dataset show that LoGFiLM consistently outperforms the original Llama-3-8B as well as the commercial LLMs GPT-3.5 and GPT-4. The results further reveal that fine-tuning Llama-3-8B with data encompassing broader contextual ranges surrounding log statements yields a better model for the automated generation of log statements.