Recent work (e.g., LAMA (Petroni et al., 2019)) has found that the quality of the factual information extracted from Large Language Models (LLMs) depends on the prompts used to query them. This inconsistency is problematic because different users will query LLMs for the same information using different wording, yet they should receive the same, accurate responses regardless. In this work we aim to address this shortcoming by introducing P-Adapters: lightweight models that sit between the embedding layer and the first attention layer of an LLM. They take LLM embeddings as input and output continuous prompts that are used to query the LLM. Additionally, we investigate Mixture of Experts (MoE) models that learn a set of continuous prompts ("experts") and select one to query the LLM; these require a separate classifier, trained on human-annotated data, to map natural language prompts to the continuous ones. P-Adapters perform comparably to the more complex MoE models in extracting factual information from BERT and RoBERTa while eliminating the need for additional annotations. P-Adapters show a 12-26% absolute improvement in precision and a 36-50% absolute improvement in consistency over a baseline that uses only natural language queries. Finally, we investigate what makes P-Adapters successful and conclude that a significant factor is access to the LLM's embeddings of the original natural language prompt, particularly the subject of the entity pair being queried.
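To make the P-Adapter idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a small bottleneck network with a residual connection that maps the embeddings of a natural language prompt to a continuous prompt of the same shape, which would then be fed to the LLM's first attention layer. All dimensions, the ReLU bottleneck, and the residual design are illustrative assumptions; the residual term reflects the abstract's finding that access to the original prompt embeddings matters.

```python
import numpy as np

rng = np.random.default_rng(0)

class PAdapter:
    """Hypothetical sketch of a P-Adapter: maps natural language prompt
    embeddings to a continuous prompt of the same shape."""

    def __init__(self, d_model: int, d_hidden: int):
        # Small bottleneck projection, randomly initialized (illustrative only).
        self.w_down = rng.normal(0.0, 0.02, size=(d_model, d_hidden))
        self.w_up = rng.normal(0.0, 0.02, size=(d_hidden, d_model))

    def __call__(self, embeddings: np.ndarray) -> np.ndarray:
        # embeddings: (seq_len, d_model) output of the LLM's embedding layer.
        hidden = np.maximum(embeddings @ self.w_down, 0.0)  # ReLU bottleneck
        # Residual connection keeps the original prompt embeddings accessible,
        # including the subject tokens of the queried entity pair.
        return embeddings + hidden @ self.w_up

# Usage: a 10-token query embedded in a 768-dim space (BERT-base sized).
d_model = 768
adapter = PAdapter(d_model, d_hidden=128)
prompt_embeddings = rng.normal(size=(10, d_model))
continuous_prompt = adapter(prompt_embeddings)
print(continuous_prompt.shape)  # (10, 768): same shape, ready for the LLM
```

In this sketch only the adapter weights would be trained; the LLM itself stays frozen, which is what makes the approach lightweight.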