When performing classification tasks with language models, would you prefer having only one highly accurate class or having every class deliver reliable performance? A more balanced accuracy across classes clearly better reflects what most users expect. For large language models (LLMs) in particular, the fair overall accuracy they achieve through in-context learning (ICL) obscures large differences in individual class accuracies. In this work, we uncover and tackle language models' imbalance in per-class prediction accuracy by reconceptualizing it as the Contextual Oddity Bias (COBias), and we are the first to employ nonlinear integer programming (NIP) to debias it. Briefly, the proposed COBias metric measures accuracy differences between class pairs, with which we reveal the large per-class accuracy differences exhibited by LLMs of varied scales and families. We then propose Debiasing as Nonlinear Integer Programming (DNIP) to correct ICL per-class probabilities toward lower COBias and higher overall accuracy. Our optimization objective is based directly on the COBias and accuracy evaluation scores; it is non-differentiable and is solved with the simulated annealing metaheuristic. Evaluations on three LLMs across seven NLP classification tasks show that DNIP simultaneously achieves significant COBias reduction (-27%) and accuracy improvement (+12%) over the conventional ICL approach, suggesting that modeling pairwise class accuracy differences is a promising direction for more accurate, more reliable LLM predictions.
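To make the pairwise metric and the optimization step concrete, the following is a minimal Python sketch rather than the paper's implementation: COBias is taken here as the mean absolute per-class accuracy difference over all class pairs, and DNIP is approximated by a simulated annealing search over integer per-class correction weights applied to ICL output probabilities. The function names, the integer weight range, and the `lam` trade-off term are illustrative assumptions, not quantities specified in the abstract.

```python
import itertools
import math
import random

import numpy as np


def per_class_accuracy(preds, labels, num_classes):
    """Accuracy of predictions restricted to each gold class."""
    accs = []
    for c in range(num_classes):
        mask = labels == c
        accs.append(float((preds[mask] == c).mean()) if mask.any() else 0.0)
    return np.array(accs)


def cobias(accs):
    """Mean absolute accuracy difference over all class pairs
    (one plausible reading of the pairwise COBias metric)."""
    pairs = list(itertools.combinations(range(len(accs)), 2))
    return float(np.mean([abs(accs[i] - accs[j]) for i, j in pairs]))


def dnip_objective(weights, probs, labels, lam=1.0):
    """Non-differentiable score: reward overall accuracy, penalize COBias."""
    preds = (probs * weights).argmax(axis=1)
    accs = per_class_accuracy(preds, labels, probs.shape[1])
    return float((preds == labels).mean()) - lam * cobias(accs)


def anneal_class_weights(probs, labels, steps=2000, t0=1.0, w_max=20, seed=0):
    """Simulated annealing over integer per-class weights (illustrative only)."""
    rng = random.Random(seed)
    k = probs.shape[1]
    weights = np.full(k, w_max // 2, dtype=int)
    cur = best = dnip_objective(weights, probs, labels)
    best_w = weights.copy()
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6                # linear cooling schedule
        cand = weights.copy()
        j = rng.randrange(k)
        cand[j] = min(w_max, max(1, cand[j] + rng.choice((-1, 1))))  # perturb one integer weight
        val = dnip_objective(cand, probs, labels)
        # Accept improvements always; accept worse moves with temperature-controlled probability.
        if val > cur or rng.random() < math.exp((val - cur) / t):
            weights, cur = cand, val
            if val > best:
                best, best_w = val, cand.copy()
    return best_w
```

In use, `probs` would be the ICL per-class probabilities on a held-out set and `labels` the gold classes; the returned integer weights would then rescale test-time class probabilities before the argmax, the same correction step the sketch optimizes.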