Humans and other intelligent animals evolved highly sophisticated perception systems that combine multiple sensory modalities. In contrast, state-of-the-art artificial agents rely mostly on visual inputs or on structured low-dimensional observations provided by instrumented environments. Learning to act from combined visual and auditory inputs remains a nascent area of research that has not been explored beyond simple scenarios. To facilitate progress in this area, we introduce a new version of the ViZDoom simulator that creates a highly efficient learning environment providing raw audio observations. We study the performance of different model architectures on a series of tasks that require the agent to recognize sounds and execute instructions given in natural language. Finally, we train our agent to play the full game of Doom and find that it can consistently defeat a traditional vision-based adversary. We are currently merging the augmented simulator into the main ViZDoom code repository. Video demonstrations and experiment code can be found at https://sites.google.com/view/sound-rl.
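As a concrete illustration of the interface described above, the following is a minimal sketch of an environment loop that consumes both visual and raw audio observations. It assumes the audio API merged upstream into ViZDoom takes the form of `set_audio_buffer_enabled` / `audio_buffer`; the scenario config, buffer settings, and random policy are placeholders, not the paper's experimental setup.

```python
# Minimal sketch: combined visual + audio observations in ViZDoom.
# Assumes the audio-buffer API described on the project page; the
# scenario config and the random action policy are illustrative only.
import numpy as np
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")  # any standard scenario config
game.set_screen_format(vzd.ScreenFormat.RGB24)

# Enable raw audio observations alongside the usual screen buffer.
game.set_audio_buffer_enabled(True)
game.set_audio_sampling_rate(vzd.SamplingRate.SR_22050)
game.set_audio_buffer_size(4)  # length of the audio window, in game tics

game.init()
game.new_episode()

while not game.is_episode_finished():
    state = game.get_state()
    frame = state.screen_buffer  # H x W x 3 uint8 image
    audio = state.audio_buffer   # stereo waveform samples as a NumPy array
    # A learned policy would fuse both modalities here; we act randomly.
    n_buttons = game.get_available_buttons_size()
    action = [np.random.randint(2) for _ in range(n_buttons)]
    game.make_action(action)

game.close()
```

A policy network would typically encode `frame` with a convolutional stack and `audio` with a separate encoder (e.g., over the raw waveform or a spectrogram) before fusing the two embeddings; the paper compares several such architectures.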