Large language models (LLMs) have revolutionized the role of AI, yet pose potential social risks. To steer LLMs towards human preference, alignment technologies have been introduced and gained increasing attention. Nevertheless, existing methods heavily rely on high-quality positive-negative training pairs, suffering from noisy positive responses that are barely distinguishable from negative ones. Given recent LLMs' proficiency in generating helpful responses, this work pivots towards a new research question: can we achieve alignment using solely human-annotated negative samples, preserving helpfulness while reducing harmfulness? For this purpose, we propose Distributional Dispreference Optimization (D$^2$O), which maximizes the discrepancy between dispreferred responses and the generated non-negative ones. In this way, D$^2$O effectively eschews harmful information without incorporating noisy positive samples, while avoiding collapse by using self-generated responses as anchors. We demonstrate that D$^2$O can be regarded as learning a distributional preference model reflecting human dispreference against negative responses, which is theoretically an upper bound of the instance-level DPO. Extensive experiments show that our method achieves comparable generation quality and surpasses the latest strong baselines in producing less harmful and more informative responses, with better training stability and faster convergence.
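To make the dispreference idea concrete, the sketch below is not the authors' reference implementation but a minimal PyTorch illustration of a DPO-style logistic loss in which the human-labeled positive is replaced by the average implicit reward of K self-generated anchor responses, so that only human-annotated negatives are needed; the function name `d2o_style_loss`, the averaging scheme, and the toy tensors are illustrative assumptions, and the exact D$^2$O objective is the one defined in the paper.

```python
import torch
import torch.nn.functional as F

def d2o_style_loss(policy_logps_neg,   # log pi_theta(y_neg | x), shape (B,)
                   ref_logps_neg,      # log pi_ref(y_neg | x),   shape (B,)
                   policy_logps_gen,   # log pi_theta(y_k | x) for K self-generated anchors, shape (B, K)
                   ref_logps_gen,      # log pi_ref(y_k | x),     shape (B, K)
                   beta=0.1):
    """Illustrative dispreference loss (assumption, not the paper's exact form):
    a DPO-like logistic objective where the positive side is the average implicit
    reward of K self-generated responses, and the negative side is a
    human-annotated dispreferred response."""
    # Implicit reward of the dispreferred (negative) response.
    reward_neg = beta * (policy_logps_neg - ref_logps_neg)           # (B,)
    # Average implicit reward over the self-generated anchor responses.
    reward_gen = beta * (policy_logps_gen - ref_logps_gen).mean(-1)  # (B,)
    # Push the self-generated distribution above the dispreferred responses.
    return -F.logsigmoid(reward_gen - reward_neg).mean()

# Toy usage: random numbers stand in for sequence-level log-probabilities.
B, K = 4, 8
policy_neg = torch.randn(B, requires_grad=True)
policy_gen = torch.randn(B, K, requires_grad=True)
loss = d2o_style_loss(policy_neg, torch.randn(B), policy_gen, torch.randn(B, K))
loss.backward()
```

Under this reading, the self-generated anchors play the role the noisy human positives would otherwise play in DPO, which is consistent with the abstract's claim that they prevent collapse while keeping noisy positive samples out of training.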