Large Language Models (LLMs) have made significant progress in reasoning, demonstrating their capability to generate human-like responses. This study analyzes the problem-solving capabilities of LLMs in the domain of thermodynamics. A benchmark of 22 thermodynamic problems, ranging from simple to advanced, is presented for evaluating LLMs. Five different LLMs are assessed: GPT-3.5, GPT-4, and GPT-4o from OpenAI, Llama 3.1 from Meta, and le Chat from MistralAI. The answers of these LLMs were evaluated by trained human experts, following a methodology akin to the grading of academic exam responses. The scores and the consistency of the answers are discussed, together with the analytical skills of the LLMs. Both strengths and weaknesses of the LLMs become evident. They generally yield good results for the simple problems, but clear limitations also emerge: the LLMs do not provide consistent results, and they often fail to fully comprehend the context and make wrong assumptions. Given the complexity and domain-specific nature of the problems, the statistical language modeling approach of the LLMs struggles with the accurate interpretation and the required reasoning. The present results highlight the need for a more systematic integration of thermodynamic knowledge with LLMs, for example, by using knowledge-based methods.