In-context learning of GPT-like models has been recognized as fragile across different hand-crafted templates and demonstration permutations. In this work, we propose prototypical calibration to adaptively learn a more robust decision boundary for zero- and few-shot classification, instead of greedy decoding. Concretely, our method first adopts a Gaussian mixture distribution to estimate the prototypical clusters for all categories. Then we assign each cluster to its corresponding label by solving a weighted bipartite matching problem. Given a test example, its prediction is calibrated by the likelihood of the prototypical clusters. Experimental results show that prototypical calibration yields a 15% absolute improvement on a diverse set of tasks. Extensive analysis across different scales also indicates that our method calibrates the decision boundary as expected, greatly improving the robustness of GPT to templates, permutations, and class imbalance.