Prompt-tuning has shown appealing performance in few-shot classification owing to its ability to effectively exploit pre-trained knowledge. This motivates us to examine the hypothesis that prompt-tuning is also a promising choice for long-tailed classification, since the tail classes are intuitively few-shot ones. To this end, we conduct empirical studies to test the hypothesis. The results demonstrate that prompt-tuning indeed makes pre-trained language models at least good long-tailed learners. To gain intuition on why prompt-tuning achieves good performance in long-tailed classification, we carry out an in-depth analysis by progressively bridging the gap between prompt-tuning and commonly used fine-tuning. The analysis reveals that the classifier structure and parameterization, rather than the less important input structure, form the key to making good long-tailed learners. Finally, we verify the applicability of our findings to few-shot classification.