When a bipedal robot's gait is developed using deep reinforcement learning, reference trajectories may or may not be used. Each approach has its advantages and disadvantages, and the choice is left to the control developer. This paper investigates the effect of reference trajectories on locomotion learning and on the resulting gaits. We trained three gaits for a full-order anthropomorphic robot model with different imitation reward ratios, performed sim-to-sim transfer of the control policies, and compared the gaits in terms of robustness and energy efficiency. In addition, since our goal was an appealing, natural-looking gait for a humanoid robot, we conducted a qualitative analysis of the gaits through a user study. According to the experimental results, the most successful approach was the one in which the per-episode average rewards for imitation and for tracking the commanded velocity remained balanced throughout training. The gait obtained with this method retains naturalness (median rating of 3.6 in the user study, versus 4.0 for the gait trained with imitation only) while remaining nearly as robust as the gait trained without reference trajectories.
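To make the notion of an imitation reward ratio concrete, here is a minimal sketch of one way such a blended reward could be formed; the function name, the Gaussian kernel form, and all weights and widths are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def combined_reward(qpos, qpos_ref, v_base, v_cmd,
                    w_imitation=0.5, sigma_imit=0.5, sigma_vel=0.25):
    """Weighted sum of an imitation term and a velocity-tracking term.

    qpos, qpos_ref : joint positions of the policy and the reference trajectory
    v_base, v_cmd  : current and commanded base velocity
    w_imitation    : imitation ratio in [0, 1];
                     0 -> no reference trajectory, 1 -> imitation only
    (all names and constants are hypothetical)
    """
    # Gaussian kernel on the pose error w.r.t. the reference trajectory
    r_imit = np.exp(-np.sum((qpos - qpos_ref) ** 2) / sigma_imit)
    # Gaussian kernel on the error w.r.t. the commanded velocity
    r_vel = np.exp(-np.sum((v_base - v_cmd) ** 2) / sigma_vel)
    return w_imitation * r_imit + (1.0 - w_imitation) * r_vel
```

Under this reading, keeping the two per-episode average reward terms balanced during training amounts to choosing (or adapting) `w_imitation` so that neither term dominates the return.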