As language models advance, privacy protection is receiving increasing attention. Training data extraction is therefore of great importance, as it can serve as a potential tool to assess privacy leakage. However, because this task is difficult, most existing methods remain proof-of-concept and are not yet sufficiently effective. In this paper, we investigate and benchmark tricks for improving training data extraction using a publicly available dataset. Because most existing extraction methods follow a generate-then-rank pipeline, i.e., generating text candidates as potential training data and then ranking them by specific criteria, our research focuses on tricks for both text generation (e.g., sampling strategy) and text ranking (e.g., token-level criteria). The experimental results show that several previously overlooked tricks can be crucial to the success of training data extraction. In evaluations on GPT-Neo 1.3B, our proposed tricks outperform the baseline by a large margin in most cases, providing a much stronger baseline for future research.
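To make the generate-then-rank pipeline concrete, the following is a minimal sketch, not the paper's benchmarked tricks, using the Hugging Face transformers library and the publicly available GPT-Neo 1.3B checkpoint mentioned above. The prompt, sampling parameters, and perplexity-based ranking criterion are illustrative assumptions; the paper studies many alternatives for both stages.

```python
# Minimal generate-then-rank sketch (illustrative only, not the paper's exact method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical prefix used to elicit potentially memorized continuations.
prompt = "Contact me at"
inputs = tokenizer(prompt, return_tensors="pt")

# Stage 1: generate candidate continuations (here: plain top-k sampling).
with torch.no_grad():
    candidates = model.generate(
        **inputs,
        do_sample=True,
        top_k=40,
        max_new_tokens=64,
        num_return_sequences=8,
        pad_token_id=tokenizer.eos_token_id,
    )

# Stage 2: rank candidates by a membership score (here: model perplexity;
# lower perplexity suggests the text is more likely memorized training data).
def perplexity(token_ids: torch.Tensor) -> float:
    with torch.no_grad():
        loss = model(token_ids.unsqueeze(0), labels=token_ids.unsqueeze(0)).loss
    return torch.exp(loss).item()

ranked = sorted(candidates, key=perplexity)
for seq in ranked[:3]:
    print(f"{perplexity(seq):.2f}", tokenizer.decode(seq, skip_special_tokens=True))
```

In practice, both stages admit many variations (e.g., different sampling temperatures or truncation strategies in stage 1, and token-level or reference-model-based scores in stage 2), which is precisely the design space the paper benchmarks.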