Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as "the Bar Exam," as a precondition for the practice of law. To even sit for the exam, most jurisdictions require that an applicant complete at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still fails to achieve a passing score on their first attempt. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in "AI"? In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often referred to as GPT-3.5, on the multistate multiple-choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For the best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly correlated with correctness; its top-two and top-three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.
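The top-two and top-three figures above are top-k accuracies over the model's rank-ordered answer choices. As a minimal sketch (not the authors' code, and using hypothetical data), the metric can be computed as follows:

```python
def top_k_accuracy(ranked_answers, correct_answers, k):
    """Fraction of questions whose correct choice appears in the model's top k.

    ranked_answers: per-question list of the four MBE choices, ordered from
    the model's most- to least-preferred.
    correct_answers: the keyed answer for each question.
    """
    hits = sum(
        1
        for ranked, correct in zip(ranked_answers, correct_answers)
        if correct in ranked[:k]
    )
    return hits / len(correct_answers)


# Hypothetical rankings for three questions (not real exam data).
ranked = [["B", "A", "D", "C"], ["C", "B", "A", "D"], ["A", "D", "B", "C"]]
correct = ["A", "C", "D"]

print(top_k_accuracy(ranked, correct, 1))  # headline (top-1) rate
print(top_k_accuracy(ranked, correct, 2))  # top-2 rate
```

Top-1 accuracy corresponds to the headline correct rate; k = 2 and k = 3 give the 71% and 88% figures reported above.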