As the performance of large language models rapidly improves, benchmarks are getting larger and more complex as well. We present LMentry, a benchmark that avoids this "arms race" by focusing on a compact set of tasks that are trivial to humans, e.g. writing a sentence containing a specific word, identifying which words in a list belong to a specific category, or choosing which of two words is longer. LMentry is specifically designed to provide quick and interpretable insights into the capabilities and robustness of large language models. Our experiments reveal a wide variety of failure cases that, while immediately obvious to humans, pose a considerable challenge for large language models, including OpenAI's latest 175B-parameter instruction-tuned model, TextDavinci002. LMentry complements contemporary evaluation approaches of large language models, providing a quick, automatic, and easy-to-run "unit test", without resorting to large benchmark suites of complex tasks.
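To make the "automatic, easy-to-run unit test" framing concrete, here is a minimal sketch of how answers to two of the example tasks could be checked programmatically. This is an illustrative assumption, not the official LMentry evaluation code; the function names and checking logic are hypothetical.

```python
import re


def contains_word(sentence: str, word: str) -> bool:
    """Check whether the generated sentence contains the target word
    as a whole word (case-insensitive). Hypothetical checker, not LMentry's."""
    return re.search(rf"\b{re.escape(word)}\b", sentence, flags=re.IGNORECASE) is not None


def picked_longer_word(answer: str, word_a: str, word_b: str) -> bool:
    """Check whether the model's answer names the longer of two words.
    Hypothetical checker, not LMentry's."""
    expected = word_a if len(word_a) > len(word_b) else word_b
    return answer.strip().strip(".").lower() == expected.lower()


# Example usage with made-up model outputs:
print(contains_word("The cat sat on the windowsill.", "windowsill"))  # True
print(picked_longer_word("windowsill", "cat", "windowsill"))          # True
```

Because each task has a deterministic notion of correctness, such checks can be run automatically over model outputs, which is what makes this style of evaluation quick and interpretable compared with large suites of complex tasks.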