Prompt tuning has recently become a hot topic in applying large pretrained language models to specific downstream tasks. Under the Language-Model-as-a-Service (LMaaS) setting, black-box tuning with derivative-free optimization (DFO) provides a novel way to expand the practical scenarios of pretrained models and enrich research on few-shot learning. In this report, we present our solution to this competition, which is based on the LMaaS scenario. Our solution consists of several modifications to BBTv2, including multiple label words, selection of P0, a rolling update strategy, a multi-task loss from an MLP classifier, and finally an ensemble method to further improve generalization ability. We also share some strategies that we tried but did not use in the final submission, for further discussion. Finally, we raise a question about the SNLI dataset and its impact on the results, as well as our concerns about the competition.