A long-standing goal of intelligent assistants such as AR glasses and robots is to assist users in affordance-centric real-world scenarios, e.g., "How can I run the microwave for 1 minute?". However, this setting still lacks a clear task definition and suitable benchmarks. In this paper, we define a new task called Affordance-centric Question-driven Task Completion (AQTC), in which the AI assistant should learn from instructional videos to provide step-by-step help from the user's view. To support the task, we constructed AssistQ, a new dataset comprising 531 question-answer samples derived from 100 newly filmed instructional videos. We also developed a novel Question-to-Actions (Q2A) model to address the AQTC task and validated it on the AssistQ dataset. The results show that our model significantly outperforms several VQA-related baselines while still leaving large room for improvement. We expect our task and dataset to advance the development of egocentric AI assistants. Our project page is available at: https://showlab.github.io/assistq/.