Two-level fractional factorial designs permit the study of multiple factors using a limited number of runs. Traditionally, these designs are obtained from catalogs available in standard textbooks or statistical software. Modern Large Language Models (LLMs) can now also produce two-level fractional factorial designs, but the quality of these designs has not previously been assessed. In this paper, we systematically evaluate two popular families of LLMs, namely GPT and Gemini models, on the construction of two-level fractional factorial designs with 8, 16, and 32 runs and 4 to 26 factors. To this end, we use prompting techniques to develop a high-quality set of design construction tasks for the LLMs. We compare the designs obtained by the LLMs with the best-known designs in terms of the resolution and minimum aberration criteria. We show that the LLMs can effectively construct optimal 8-, 16-, and 32-run designs with up to eight factors.
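As a brief illustration of the kind of design and evaluation criterion discussed above, the sketch below constructs a small two-level fractional factorial design and reads off its resolution. The specific choice of a 2^(4-1) design with generator D = ABC is an assumption made purely for illustration; it is not taken from the paper's evaluation tasks.

```python
import itertools
import numpy as np

# Full 2^3 factorial in factors A, B, C (levels coded -1/+1).
base = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]

# Add factor D via the generator D = ABC, giving an 8-run 2^(4-1)
# design with defining relation I = ABCD (a single word of length 4).
D = A * B * C
design = np.column_stack([A, B, C, D])
print(design)

# The resolution is the length of the shortest word in the defining
# relation. Here the only word is ABCD, so the design has resolution
# IV: no main effect is aliased with any two-factor interaction.
word_lengths = [4]  # word length pattern of I = ABCD
print("resolution:", min(word_lengths))
```

For larger designs, the same idea applies with several generators; the minimum aberration criterion then compares competing designs by their full word length patterns, preferring the design with the fewest short words.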