Reinforcement Learning (RL) has proven highly effective in aligning Large Language Models (LLMs) with human preferences. Typical RL methods optimize a single sequence-level reward, which can lead to a suboptimal learning process. This reflects a key credit assignment problem: identifying which tokens to reinforce and which to suppress. To address these shortcomings, step-wise and token-wise methods have been proposed. However, step-wise methods rely on punctuation-based segmentation and still fail to accurately identify the key tokens, while token-wise methods are too fine-grained, attending to many unimportant tokens and thus introducing considerable noise. To assign more accurate rewards to different tokens and thereby improve credit assignment, we propose the "Adaptive Segment-wise Reward" method, which delineates segments adaptively by semantic meaning rather than by punctuation. Experiments demonstrate that our method can be integrated into various training methods. Compared to training \textit{without} our approach, it improves the success rate on adversarial samples by 10\% and yields a 1.3\% improvement on evaluation benchmarks such as MMLU, GSM8K, and HumanEval.
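To make the idea concrete, the sketch below illustrates one way a segment-wise reward signal could be constructed: segments are delineated wherever consecutive token embeddings become semantically dissimilar, and each segment's scalar reward is broadcast to the tokens inside it for the RL update. The cosine-similarity boundary rule, the threshold, and the function names (`semantic_segments`, `segment_wise_token_rewards`) are illustrative assumptions, not the paper's actual segmentation criterion or implementation.

```python
import torch


def semantic_segments(token_embeddings: torch.Tensor, threshold: float = 0.7):
    """Split a token sequence into segments at positions where consecutive
    token embeddings are semantically dissimilar (cosine similarity below
    a threshold). This boundary rule is a hypothetical stand-in for the
    paper's semantic segmentation. Returns (start, end) pairs, end exclusive."""
    sims = torch.nn.functional.cosine_similarity(
        token_embeddings[:-1], token_embeddings[1:], dim=-1
    )
    boundaries = [0]
    for i, s in enumerate(sims.tolist(), start=1):
        if s < threshold:
            boundaries.append(i)
    boundaries.append(token_embeddings.shape[0])
    return list(zip(boundaries[:-1], boundaries[1:]))


def segment_wise_token_rewards(segment_rewards, segments, seq_len):
    """Broadcast each segment-level reward to the tokens inside that
    segment, giving a per-token reward vector for the policy update."""
    token_rewards = torch.zeros(seq_len)
    for r, (start, end) in zip(segment_rewards, segments):
        token_rewards[start:end] = r
    return token_rewards


# Toy usage: 6 tokens with embedding dimension 4.
emb = torch.randn(6, 4)
segs = semantic_segments(emb, threshold=0.5)
rewards = torch.randn(len(segs))  # one scalar reward per segment
per_token = segment_wise_token_rewards(rewards, segs, emb.shape[0])
```

This per-token reward vector could then replace the uniform sequence-level reward in a standard policy-gradient or PPO-style objective; how the segment rewards themselves are scored is left to the reward model and is not specified here.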