A critical problem limiting the practical utility of controllers trained with deep Reinforcement Learning (RL) is the notable lack of smoothness in the actions learned by RL policies. This often manifests as control-signal oscillation and can result in poor control, high power consumption, and undue system wear. We introduce Conditioning for Action Policy Smoothness (CAPS), an effective yet intuitive regularization on action policies that consistently improves the smoothness of the state-to-action mappings learned by neural network controllers, reflected in the elimination of high-frequency components from the control signal. When tested on a real quadrotor drone, the improvement in controller smoothness reduced power consumption by almost 80% while consistently yielding flight-worthy controllers. Project website: http://ai.bu.edu/caps
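To make the idea concrete, below is a minimal NumPy sketch of the kind of smoothness regularization the abstract describes: one term penalizing action changes between consecutive states (temporal smoothness) and one penalizing action changes under small state perturbations (spatial smoothness). The function name, weights, and perturbation scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smoothness_penalty(policy, states, next_states,
                       sigma=0.05, lambda_t=1.0, lambda_s=1.0, rng=None):
    """Illustrative CAPS-style smoothness penalty (names/weights assumed).

    policy: maps a batch of states (N, ds) to a batch of actions (N, da).
    Returns a scalar penalty to add to the policy's training loss.
    """
    rng = np.random.default_rng() if rng is None else rng
    actions = policy(states)
    # Temporal smoothness: discourage large action changes between
    # consecutive states s_t and s_{t+1}.
    l_t = np.mean(np.linalg.norm(policy(next_states) - actions, axis=-1))
    # Spatial smoothness: discourage large action changes for states
    # perturbed by small Gaussian noise around s_t.
    perturbed = states + rng.normal(0.0, sigma, size=states.shape)
    l_s = np.mean(np.linalg.norm(policy(perturbed) - actions, axis=-1))
    return lambda_t * l_t + lambda_s * l_s
```

A perfectly smooth (constant) policy incurs zero penalty, while a policy whose actions change sharply with the state is penalized, which is what drives out high-frequency components in the control signal.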