Aligning large language models with multiple human expectations and values is crucial for ensuring that they adequately serve a variety of user needs. To this end, offline multiobjective alignment algorithms such as the Rewards-in-Context algorithm have shown strong performance and efficiency. However, inappropriate preference representations and training on imbalanced reward scores limit the performance of such algorithms. In this work, we introduce ParetoHqD, which addresses these issues by representing human preferences as preference directions in the objective space and treating data near the Pareto front as "high-quality" data. For each preference, ParetoHqD follows a two-stage supervised fine-tuning process, where each stage uses an individual Pareto high-quality training set that best matches its preference direction. Experimental results demonstrate the superiority of ParetoHqD over five baselines on two multiobjective alignment tasks.
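To make the data-selection idea concrete, the following is a minimal sketch (not the paper's actual implementation) of how one might identify "Pareto high-quality" training samples for a given preference: compute the non-dominated set of reward-score vectors and keep the samples whose (normalized) reward vectors best align with the preference direction. The cosine-similarity matching rule, the normalization, and the cutoff `k` are assumptions introduced here for illustration only.

```python
import numpy as np


def pareto_front_mask(rewards: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated points (maximization in every objective)."""
    n = rewards.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # point j dominates i if it is >= in all objectives and > in at least one
        dominated_by = np.all(rewards >= rewards[i], axis=1) & np.any(rewards > rewards[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask


def select_high_quality(rewards: np.ndarray, preference: np.ndarray, k: int = 100) -> np.ndarray:
    """Indices of up to k Pareto-optimal samples whose reward vectors best
    align (by cosine similarity -- an assumption of this sketch) with the
    given preference direction."""
    # normalize each objective to [0, 1] so that directions are comparable
    lo, hi = rewards.min(axis=0), rewards.max(axis=0)
    normed = (rewards - lo) / (hi - lo + 1e-8)
    front = np.where(pareto_front_mask(normed))[0]
    pref = preference / np.linalg.norm(preference)
    sims = (normed[front] @ pref) / (np.linalg.norm(normed[front], axis=1) + 1e-8)
    return front[np.argsort(-sims)][:k]


# Toy usage with two hypothetical reward models and preference weights (0.8, 0.2)
rng = np.random.default_rng(0)
scores = rng.normal(size=(5000, 2))  # e.g. (helpfulness, harmlessness) reward scores
idx = select_high_quality(scores, np.array([0.8, 0.2]), k=50)
print(len(idx), "samples selected for the first fine-tuning stage")
```

In this reading, each preference direction yields its own filtered subset, which would then serve as the training set for the corresponding supervised fine-tuning stage; how the two stages differ is not specified by the abstract and is left out of the sketch.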