This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available datasets. We carry out an extensive technical evaluation of ChatGPT on 23 datasets covering 8 common NLP application tasks. We evaluate the multitask, multilingual, and multimodal aspects of ChatGPT based on these datasets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin-script languages than at generating them. It is able to generate multimodal content from textual prompts via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average across 10 different reasoning categories spanning logical reasoning, non-textual reasoning, and commonsense reasoning, making it an unreliable reasoner. For example, it is better at deductive than at inductive reasoning. Like other LLMs, ChatGPT suffers from hallucination problems, and it generates more extrinsic hallucinations from its parametric memory because it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, e.g., by 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn "prompt engineering" fashion. We also release a codebase for evaluation set extraction.
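To make the reported interactivity gains concrete, the sketch below shows how one might score a single-turn output against a multi-turn refined output using the same metrics cited above (ROUGE-1 for summarization, ChrF++ for machine translation). This is a minimal illustration assuming the rouge-score and sacrebleu Python packages; the example strings are hypothetical placeholders, not data from the paper.

import sacrebleu
from rouge_score import rouge_scorer

# Hypothetical reference and model outputs (not from the paper's datasets).
reference = "The council approved the new housing plan after a lengthy debate."
single_turn = "The council talked about housing."
multi_turn = "The council approved the new housing plan following a long debate."

# ROUGE-1 F1, as used for the summarization comparison.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
r1_single = scorer.score(reference, single_turn)["rouge1"].fmeasure
r1_multi = scorer.score(reference, multi_turn)["rouge1"].fmeasure

# ChrF++ (chrF with word bigrams, i.e. word_order=2), as used for translation.
chrf_single = sacrebleu.corpus_chrf([single_turn], [[reference]], word_order=2).score
chrf_multi = sacrebleu.corpus_chrf([multi_turn], [[reference]], word_order=2).score

print(f"ROUGE-1 gain from multi-turn refinement: {r1_multi - r1_single:+.3f}")
print(f"ChrF++  gain from multi-turn refinement: {chrf_multi - chrf_single:+.2f}")

Applied over a whole evaluation set rather than a single pair, this kind of per-metric delta is what the 8% ROUGE-1 and 2% ChrF++ figures above summarize.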