Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only a few downstream examples are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompts (e.g., continuous embeddings), which fall short in interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompts, on the other hand, are difficult to optimize and are often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of reward signals from the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferable between different LMs while retaining significant performance, indicating that LM prompting may not follow human language patterns.
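The abstract only sketches the approach, so the following is an illustrative toy, not the paper's actual implementation: a REINFORCE-style loop that optimizes a discrete prompt over a tiny hypothetical vocabulary, with per-batch z-score reward normalization standing in for the reward stabilization mentioned above. The vocabulary, reward function, and hyperparameters are all invented for the sketch; in RLPrompt the reward would come from querying the downstream LM.

```python
import math
import random

random.seed(0)

# Hypothetical setup: choose a 2-token discrete prompt from a tiny vocabulary.
VOCAB = ["great", "absolutely", "movie", "terrible", "junk"]
HIGH_REWARD = {"great", "absolutely"}  # invented tokens the toy "LM" rewards
PROMPT_LEN = 2

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reward(prompt_tokens):
    # Stand-in for the downstream task score obtained by querying the LM.
    return sum(1.0 for tok in prompt_tokens if tok in HIGH_REWARD)

# Policy: one independent softmax per prompt position (a minimal stand-in
# for a parameter-efficient policy network).
logits = [[0.0] * len(VOCAB) for _ in range(PROMPT_LEN)]
lr, batch_size = 0.5, 8

for step in range(200):
    batch, rewards = [], []
    for _ in range(batch_size):
        actions = [sample(softmax(logits[p])) for p in range(PROMPT_LEN)]
        batch.append(actions)
        rewards.append(reward([VOCAB[a] for a in actions]))
    # Reward stabilization (illustrative): z-score normalize within the batch.
    mean = sum(rewards) / batch_size
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / batch_size)
    advs = [(r - mean) / (std + 1e-8) for r in rewards]
    # REINFORCE update: grad of log pi(a) w.r.t. logit k is 1[k==a] - prob_k.
    for actions, adv in zip(batch, advs):
        for p, a in enumerate(actions):
            probs = softmax(logits[p])
            for k in range(len(VOCAB)):
                logits[p][k] += lr / batch_size * adv * ((k == a) - probs[k])

# Greedy decode of the trained policy.
greedy_prompt = [VOCAB[max(range(len(VOCAB)), key=lambda k: logits[p][k])]
                 for p in range(PROMPT_LEN)]
print(greedy_prompt)
```

Because all rewards in a converged batch are equal, the normalized advantages vanish and the update naturally stops, which is one intuition for why batch-level normalization stabilizes training against a noisy LM reward.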