We focus on an unloading problem, typical of the logistics sector, modeled as a sequential pick-and-place task. In this type of task, modern machine learning techniques have been shown to outperform classic systems, since they adapt better to stochasticity and cope better with large uncertainties. More specifically, supervised and imitation learning have achieved outstanding results in this regard, with the shortcoming of requiring some form of supervision, which is not obtainable in all settings. On the other hand, reinforcement learning (RL) requires a much milder form of supervision but remains impracticable due to its sample inefficiency. In this paper, we propose and theoretically motivate a novel Unsupervised Reward Shaping algorithm from expert observations, which relaxes the level of supervision required by the agent and improves RL performance on our task.