Reinforcement learning (RL) is already widely applied in domains such as robotics, but it is only sparsely used in sensor management. In this paper, we apply the popular Proximal Policy Optimization (PPO) approach to a multi-agent UAV tracking scenario. While recorded data from real scenarios can accurately reflect the real world, the required amount of data is not always available. Simulated data, by contrast, is typically cheap to generate, but the target behavior it employs is often naive and only vaguely representative of the real world. In this paper, we use multi-agent RL to jointly generate protagonistic and antagonistic policies and thereby overcome the data generation problem, as the policies are generated on the fly and adapt continuously. In this way, we clearly outperform baseline methods and robustly generate competitive policies. In addition, we investigate explainable artificial intelligence (XAI) by interpreting feature saliency and by generating an easy-to-read decision tree as a simplified policy.