Global models are trained to be as generalizable as possible, with user invariance considered desirable since the models are shared across multitudes of users. As such, these models are often unable to produce personalized responses for individual users based on their data. Contrary to widely used personalization techniques based on few-shot learning, we propose UserIdentifier, a novel scheme for training a single shared model for all users. Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data. We empirically demonstrate that the proposed method outperforms the prefix-tuning-based state-of-the-art approach by up to 13% on a suite of sentiment analysis datasets. We also show that, unlike prior work, this method requires neither additional model parameters nor extra rounds of few-shot fine-tuning.
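A minimal sketch of the idea described above, under the assumption that the identifier is a fixed string of random tokens prepended to each of a user's examples; the helper names (make_user_identifier, personalize) and the identifier construction are illustrative, not the paper's exact implementation:

```python
# Sketch: each user's examples are prefixed with a fixed, non-trainable
# identifier, and a single shared model is fine-tuned on the augmented data.
import random
import string


def make_user_identifier(user_id: str, length: int = 10, seed: int = 0) -> str:
    """Deterministically map a user id to a fixed string of random tokens."""
    rng = random.Random(f"{user_id}-{seed}")
    # Draw random lowercase "words"; the identifier is fixed per user and is
    # never updated during training (it is not a trainable embedding).
    return " ".join(
        "".join(rng.choices(string.ascii_lowercase, k=4)) for _ in range(length)
    )


def personalize(example_text: str, user_id: str) -> str:
    """Prepend the user's fixed identifier to the raw input text."""
    return f"{make_user_identifier(user_id)} {example_text}"


# Usage: the augmented text is then tokenized and fed to a shared sentiment
# classifier exactly as any other input would be.
print(personalize("the movie was surprisingly good", user_id="user_42"))
```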