Derivative-free prompt learning has emerged as a lightweight alternative to prompt tuning, requiring only model inference to optimize the prompts. However, existing work has not fully exploited the over-parameterization of large pre-trained language models (PLMs). In this paper, we propose Clip-Tuning, a simple yet effective method that adopts diverse frozen "thinned" networks of PLMs to obtain a mixture of rewards and thus advances derivative-free prompt learning. Each thinned network consists of all the hidden units that survive a stationary dropout strategy, and its inference predictions reflect a partial view over the prompted training samples; together they form an ensemble. Our method outperforms previous gradient-free prompt learning methods and achieves parity with gradient-based counterparts on seven language understanding benchmarks under few-shot settings.
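The reward-mixture idea can be illustrated with a short sketch. This is not the authors' reference implementation: the backbone choice (`roberta-large`), the helper names (`mixture_reward`, `score_fn`, `prompt_embeds`), and the use of per-seed dropout masks to realize "thinned" networks are all illustrative assumptions about how a frozen PLM could yield multiple partial views whose averaged scores serve as the fitness signal for a derivative-free prompt optimizer (e.g., CMA-ES).

```python
import torch
from transformers import AutoModelForMaskedLM

# Sketch only: a frozen masked-LM backbone evaluated under several fixed dropout
# masks ("thinned" networks); their scores are averaged into a mixture of rewards.
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.train()                      # keep dropout active so each forward pass uses a thinned network
for p in model.parameters():
    p.requires_grad_(False)        # frozen PLM: inference only, no gradients

def mixture_reward(prompt_embeds, input_embeds, labels, score_fn, num_views=4):
    """Average rewards from several thinned sub-networks of the frozen PLM.

    `score_fn` is a hypothetical scoring helper, e.g. negative cross-entropy
    over verbalizer tokens on the prompted few-shot samples.
    """
    rewards = []
    for seed in range(num_views):
        torch.manual_seed(seed)    # stationary dropout: the same mask per view is reused across evaluations
        with torch.no_grad():
            full_embeds = torch.cat([prompt_embeds, input_embeds], dim=1)
            logits = model(inputs_embeds=full_embeds).logits
        rewards.append(score_fn(logits, labels))
    # The averaged reward is returned to a derivative-free optimizer that
    # updates `prompt_embeds` without any backpropagation through the PLM.
    return sum(rewards) / len(rewards)
```

In this sketch, fixing the random seed per view keeps each dropout pattern stationary across optimization steps, so every view behaves like a distinct frozen sub-network rather than a fresh random sample at each evaluation.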