Interactive instruction following has been proposed as a benchmark task for learning to map natural language instructions and first-person vision into sequences of actions for interacting with objects in 3D environments. We find that an existing end-to-end neural model for this task often fails to interact with objects whose attributes are unseen and to follow diverse instructions. We hypothesize that this problem is caused by the high sensitivity of neural feature extraction to small changes in the vision and language inputs. To mitigate this problem, we propose a neuro-symbolic approach that uses high-level symbolic features, which are robust to small changes in the raw inputs, as intermediate representations. We verify the effectiveness of our model with the subtask evaluation on the ALFRED benchmark. Our experiments show that our approach significantly outperforms the end-to-end neural model by 9, 46, and 74 points in success rate on the ToggleObject, PickupObject, and SliceObject subtasks in unseen environments, respectively.
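To make the core idea concrete, here is a minimal sketch, assuming a hypothetical `perceive` detector and a keyword-matching `policy`; all names and the fixture facts are illustrative assumptions, not the authors' actual model. It shows how discretizing raw inputs into symbolic facts can insulate the action policy from small input perturbations:

```python
# Illustrative sketch (not the authors' code): neural perception emits
# discrete symbolic facts, and a symbolic policy maps (instruction, facts)
# to an action. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolicFact:
    obj: str        # e.g. "Knife"
    attribute: str  # e.g. "pickupable"

def perceive(frame) -> set[SymbolicFact]:
    """Stand-in for a neural detector that discretizes raw pixels into
    symbolic facts; this discretization is what makes the downstream
    policy insensitive to small visual changes."""
    # A real model would run object detection here; we return a fixture.
    return {SymbolicFact("Knife", "pickupable"),
            SymbolicFact("Apple", "sliceable")}

def policy(instruction: str, facts: set[SymbolicFact]) -> str:
    """Symbolic policy: matches instruction keywords against facts."""
    if "pick" in instruction.lower():
        for f in facts:
            if f.attribute == "pickupable":
                return f"PickupObject({f.obj})"
    if "slice" in instruction.lower():
        for f in facts:
            if f.attribute == "sliceable":
                return f"SliceObject({f.obj})"
    return "NoOp"

print(policy("Pick up the knife", perceive(frame=None)))
# -> PickupObject(Knife)
```

Because the policy only ever sees discrete facts, small perturbations to the input frame or instruction wording that do not flip a detection or keyword match leave the chosen action unchanged, which is the robustness property the intermediate symbolic representation is meant to provide.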