Fine-tuning large language models (LLMs) on diverse datasets is crucial for enhancing their overall performance across various domains. In practical scenarios, existing methods that model the mixture proportions of the data composition often struggle when domain labels are missing, imprecise, or non-normalized, while methods based on data selection usually have difficulty balancing multi-domain performance. To address these challenges, in this work we investigate the role of data diversity in enhancing the overall abilities of LLMs by empirically constructing contrastive data pools and theoretically deriving explanations. Building upon the insights gained, we propose a new method that gives the LLM a dual identity: an output model that cognitively probes and selects data based on a diversity reward, and an input model that is tuned with the selected data. Extensive experiments show that the proposed method notably boosts performance on domain-undetermined data and a series of foundational downstream tasks when applied to various advanced LLMs. We release our code and hope this study can shed light on the understanding of data diversity and advance feedback-driven data-model co-design for LLMs.
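To make the described dual-identity loop concrete, below is a minimal, self-contained Python sketch of one plausible reading of it: the model in its output role probes candidate examples into a representation space, a diversity reward scores each candidate against the already-selected pool, and the selected subset is what the model in its input role would then be tuned on. This is our own illustration, not the paper's implementation: the names (`toy_probe`, `diversity_reward`, `select_diverse`) are hypothetical, and the greedy max-min selection is just one simple way to instantiate a diversity reward.

```python
# Hypothetical sketch of a diversity-reward data-selection loop.
# None of these names come from the paper; the max-min criterion is
# an assumption used only to make the abstract's idea executable.
import math
import random
from typing import Callable, List, Sequence

Vector = List[float]


def diversity_reward(candidate: Vector, selected: Sequence[Vector]) -> float:
    """Reward a candidate by its distance to the nearest already-selected
    example (max-min style); an empty pool gives an unbounded reward."""
    if not selected:
        return math.inf
    return min(math.dist(candidate, s) for s in selected)


def select_diverse(pool: List[str],
                   probe: Callable[[str], Vector],
                   k: int) -> List[str]:
    """Output-model role: embed each candidate with the model's own probe,
    then greedily pick k examples maximizing the diversity reward."""
    embeddings = {x: probe(x) for x in pool}
    selected: List[str] = []
    chosen_vecs: List[Vector] = []
    remaining = list(pool)
    for _ in range(min(k, len(pool))):
        best = max(remaining,
                   key=lambda x: diversity_reward(embeddings[x], chosen_vecs))
        selected.append(best)
        chosen_vecs.append(embeddings[best])
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    # Toy probe: a fixed random 2-D embedding per example, standing in for
    # the LLM's own representation of each candidate.
    rng = random.Random(0)
    cache: dict = {}

    def toy_probe(x: str) -> Vector:
        if x not in cache:
            cache[x] = [rng.random(), rng.random()]
        return cache[x]

    pool = [f"example-{i}" for i in range(10)]
    chosen = select_diverse(pool, toy_probe, k=3)
    print("selected for tuning:", chosen)
    # Input-model role: the same LLM would then be fine-tuned on `chosen`.
```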