Traditional approaches to the design of multi-agent navigation algorithms treat the environment as a fixed constraint, despite the obvious influence of spatial constraints on agents' performance. Yet hand-designing improved environment layouts and structures is inefficient and potentially expensive. The goal of this paper is to consider the environment as a decision variable in a system-level optimization problem, where both agent performance and environment cost can be accounted for. We begin by proposing a novel environment optimization problem. We show, through formal proofs, under which conditions the environment can be modified while guaranteeing completeness (i.e., all agents reach their navigation goals). Our solution leverages a model-free reinforcement learning approach. To accommodate a broad range of implementation scenarios, we include both online and offline optimization, and both discrete and continuous environment representations. Numerical results corroborate our theoretical findings and validate our approach.