Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019). For example, they can perform multiple choice tasks simply by conditioning on a question and selecting the answer with the highest probability. However, ranking by string probability can be problematic due to surface form competition, wherein different surface forms compete for probability mass even if they represent the same underlying concept, e.g. "computer" and "PC." Since probability mass is finite, this lowers the probability of the correct answer, due to competition from other strings that are valid answers (but not one of the multiple choice options). We introduce Domain Conditional Pointwise Mutual Information, an alternative scoring function that directly compensates for surface form competition by simply reweighting each option according to a term that is proportional to its a priori likelihood within the context of the specific zero-shot task. It achieves consistent gains in zero-shot performance over both calibrated (Zhao et al., 2021) and uncalibrated scoring functions on all GPT-2 and GPT-3 models across a variety of multiple choice datasets.
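To make the scoring function concrete, the sketch below shows one way to compute a domain-conditional PMI score with GPT-2 via the Hugging Face transformers library: each option is scored by its log-probability given the question minus its log-probability given a short domain premise string. The specific prompt text, the domain premise, and the helper names (answer_logprob, pmi_dc_score) are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of domain-conditional PMI scoring, assuming the
# Hugging Face transformers library and the public "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def answer_logprob(context: str, answer: str) -> float:
    """Sum of log P(answer tokens | context) under the language model."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    ans_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position t predict token t + 1, so drop the final position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    ans_start = ctx_ids.shape[1] - 1          # rows that predict the answer tokens
    ans_targets = input_ids[0, ctx_ids.shape[1]:]
    return log_probs[ans_start:, :].gather(1, ans_targets.unsqueeze(1)).sum().item()

def pmi_dc_score(question: str, option: str, domain_premise: str) -> float:
    """log P(option | question) - log P(option | domain premise)."""
    return answer_logprob(question, option) - answer_logprob(domain_premise, option)

# Usage: pick the option with the highest domain-conditional PMI.
question = "Q: What do people type on? A:"
domain_premise = "A:"                          # illustrative domain string
options = [" a computer", " a PC", " a banana"]
best = max(options, key=lambda o: pmi_dc_score(question, o, domain_premise))
print(best)
```

Because the domain-conditional term divides out how likely each surface form is a priori within the task's answer domain, frequent strings such as " a computer" no longer crowd out equally valid but rarer phrasings when options are ranked.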