A central question in deploying large language models (LLMs) is how to augment them with private data. We propose Differentially Private In-Context Learning (DP-ICL), which enables LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplar-conditioned predictions using the Report-Noisy-Max mechanism. We evaluate DP-ICL on four benchmarks and find that it achieves performance comparable to non-private ICL, with less than 2\% degradation.
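For intuition, below is a minimal sketch of how a noisy consensus could be established with Report-Noisy-Max: per-label votes from the ensemble get independent Laplace noise, and only the index of the noisy maximum is released. The vote counts, noise scale, and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def report_noisy_max(vote_counts, epsilon, rng=None):
    """Release the index of the noisy maximum of the vote counts.

    vote_counts: per-label votes aggregated over the ensemble
    epsilon:     privacy parameter; Lap(1/epsilon) noise per count,
                 following the classic Report-Noisy-Max analysis
    """
    rng = rng or np.random.default_rng()
    noisy = np.asarray(vote_counts, dtype=float) + rng.laplace(
        scale=1.0 / epsilon, size=len(vote_counts)
    )
    return int(np.argmax(noisy))

# Hypothetical usage: an ensemble of 10 prompts, each built from a
# different exemplar subset, votes over 3 candidate labels.
votes = [7, 2, 1]  # e.g., 7 of 10 exemplar-conditioned prompts pick label 0
label = report_noisy_max(votes, epsilon=1.0)
print(label)       # only the consensus label is released, not the votes
```

Because only the argmax of the noised counts is published, no single exemplar's contribution to the vote tally is directly observable, which is what yields the differential privacy guarantee.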