While PLMs have been widely used to generate high-quality texts in a supervised manner (by imitating texts written by humans), they lack a mechanism for generating texts that directly optimize a given reward, e.g., user feedback such as clicks, or a criterion that cannot be optimized by gradient descent. In real-world applications, we usually wish to achieve more than imitating existing texts. For example, we may wish to generate more attractive texts that lead to more user clicks, more diversified texts that improve user experience, and more personalized texts that are better tailored to user tastes. Combining RL with PLMs provides a unified solution for all these scenarios and is key to achieving human parity in text generation. Such a method has the potential to be applied in a wide range of products, e.g., Microsoft Advertising (text ad generation), Microsoft News (news headline generation), and Microsoft Stores and Xbox (optimizing descriptions of recommended items).
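To make the idea of optimizing a non-differentiable reward concrete, the sketch below shows a minimal REINFORCE-style policy-gradient loop on a toy token policy. This is an illustrative assumption on our part, not the UMPG method or any production system: the vocabulary, the "click" reward, the bag-of-tokens policy, and all hyperparameters are made up for the example. The point is only that the reward function is a black box (here, counting "attractive" tokens), yet the policy can still be improved by weighting log-probability gradients with the observed reward.

```python
import math
import random

# Toy setup (all names below are illustrative assumptions):
VOCAB = ["plain", "sale", "new", "free"]   # tiny token set
ATTRACTIVE = {"sale", "free"}              # tokens a user "clicks" on

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_token(probs, rng):
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reward(tokens):
    # Non-differentiable reward standing in for user clicks:
    # we can evaluate it, but cannot backpropagate through it.
    return sum(1.0 for t in tokens if t in ATTRACTIVE)

def train(steps=2000, seq_len=4, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(VOCAB)  # shared logits: a bag-of-tokens "policy"
    baseline = 0.0               # running-mean baseline for variance reduction
    for _ in range(steps):
        probs = softmax(logits)
        idxs = [sample_token(probs, rng) for _ in range(seq_len)]
        r = reward(VOCAB[i] for i in idxs)
        advantage = r - baseline
        baseline += 0.05 * (r - baseline)
        # REINFORCE update: grad of log p(token i) w.r.t. logits
        # is (one_hot(i) - probs), scaled by the advantage.
        for i in idxs:
            for j in range(len(VOCAB)):
                g = (1.0 if j == i else 0.0) - probs[j]
                logits[j] += lr * advantage * g
    return softmax(logits)

probs = train()
```

After training, the policy concentrates its probability mass on the tokens the reward favors, even though the reward was never differentiated. In the real setting the categorical policy is replaced by a PLM's per-step token distribution and the toy reward by logged user feedback, but the gradient estimator has the same shape.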
In this project, we aim to study how pretrained language models (PLMs) can be enhanced with deep reinforcement learning (RL) to generate attractive, high-quality text ads. While finetuning PLMs has been shown to produce high-quality texts, RL additionally provides a principled way to directly optimize user feedback (e.g., user clicks) to improve attractiveness. Our initial RL method, UMPG, is deployed in Dynamic Search Ads and was published at KDD 2021. We wish to extend the method so that it works with all pretrained language models (in addition to UNILM) and to study how the technique can benefit other important Microsoft Advertising products and international markets.