Natural language guided embodied task completion is a challenging problem: it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce the desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or a manipulation action. We observe that the proposed modules yield improved, and in fact state-of-the-art, performance on the unseen validation split of a popular benchmark dataset, ALFRED. However, our best model, selected using the unseen validation split, underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation split may not by itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result because we believe it may be a wider phenomenon in machine learning tasks, though one primarily noticeable in benchmarks that limit evaluations on test splits, and because it underscores the need to modify benchmark design to better account for variance in model performance.