Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the data collected, with minimal human supervision. While it is possible to build agents that learn in this manner without supervision, current methods struggle to scale to the real world. Thus, we propose ALAN, an autonomously exploring robotic agent that can perform tasks in the real world with little training and interaction time. This is enabled by measuring environment change, which reflects object movement and ignores changes in the robot's position. We use this metric directly as an environment-centric signal, and also maximize the uncertainty of predicted environment change, which provides an agent-centric exploration signal. We evaluate our approach in two different real-world play-kitchen settings, enabling a robot to efficiently explore, discover manipulation skills, and perform tasks specified via goal images. Website at https://robo-explorer.github.io/
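To make the two exploration signals concrete, the following is a minimal sketch of how they could be combined into an intrinsic reward. It assumes a robot-segmentation mask is available to exclude the robot's own pixels and that an ensemble of forward models each predicts a scalar environment change for a candidate action; the function names and the combination weight `beta` are illustrative, not the paper's exact implementation.

```python
import numpy as np


def environment_change(obs_before: np.ndarray, obs_after: np.ndarray,
                       robot_mask: np.ndarray) -> float:
    """Environment-centric signal: image change outside the robot's mask.

    obs_before / obs_after: HxWxC images taken before and after an action.
    robot_mask: HxW boolean array, True where the robot occupies pixels
    (assumed to come from a separate robot-segmentation step).
    """
    diff = np.abs(obs_after.astype(np.float32) - obs_before.astype(np.float32))
    diff[robot_mask] = 0.0  # ignore changes caused by the robot's own motion
    return float(diff.mean())


def agent_centric_signal(predicted_changes: np.ndarray) -> float:
    """Agent-centric signal: disagreement (uncertainty) across an ensemble's
    predictions of environment change for a candidate action.

    predicted_changes: shape (ensemble_size,), one scalar prediction per model.
    """
    return float(predicted_changes.std())


def intrinsic_reward(obs_before, obs_after, robot_mask,
                     predicted_changes, beta: float = 1.0) -> float:
    """Combine the environment-centric and agent-centric signals."""
    return (environment_change(obs_before, obs_after, robot_mask)
            + beta * agent_centric_signal(predicted_changes))
```

In this sketch, actions that physically move objects earn reward through the masked image difference, while actions whose outcomes the ensemble cannot yet predict earn reward through disagreement, steering exploration toward interactions the agent has not mastered.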