We present an empirical evaluation of outputs generated by nine of the most widely available large language models (LLMs). Our analysis uses off-the-shelf, readily available tools. We find a correlation between the percentage of memorized text, the percentage of unique text, and overall output quality, where quality is measured with respect to output pathologies such as counterfactual and logically flawed statements, and general failures such as not staying on topic. Overall, 80.0% of the outputs evaluated contained memorized data, yet the outputs containing the most memorized content were also more likely to be judged of high quality. We discuss and evaluate mitigation strategies, showing that they reduce the rate at which the evaluated models emit memorized text. We conclude with a discussion of the potential implications for what it means to learn, to memorize, and to evaluate text quality.