Temporal grounding aims to predict the time interval in a video that corresponds to a natural language query. In this work, we present EVOQUER, a temporal grounding framework that combines an existing text-to-video grounding model with a video-assisted query generation network. Given a query and an untrimmed video, the temporal grounding model predicts the target interval, and the predicted video clip is then passed to a video translation task that generates a simplified version of the input query. EVOQUER forms a closed loop by combining the loss functions of temporal grounding and query generation, with the latter serving as feedback. Our experiments on two widely used datasets, Charades-STA and ActivityNet, show that EVOQUER achieves promising improvements of 1.05 and 1.31 at R@0.7. We also discuss how the query generation task could facilitate error analysis by explaining the behavior of the temporal grounding model.
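To make the closed-loop objective concrete, the sketch below shows one way such a training step could be wired up; it is not the authors' released code, and the `grounding_model`, `query_decoder`, `crop` helper, and `lambda_qg` weight are all hypothetical placeholders for the interfaces described in the abstract.

```python
# Minimal sketch of a closed-loop training step combining temporal grounding
# and query generation losses (assumed interfaces, not the authors' code).
import torch
import torch.nn.functional as F

def evoquer_step(grounding_model, query_decoder, video_feats, query_tokens,
                 gt_interval, simplified_query_tokens, lambda_qg=1.0):
    # Temporal grounding: predict the (start, end) interval for the query
    # and compute the grounding loss against the ground-truth interval.
    pred_interval, grounding_loss = grounding_model(video_feats, query_tokens, gt_interval)

    # Query generation ("video translation"): decode a simplified query
    # from the features of the predicted clip (teacher forcing on the target).
    clip_feats = grounding_model.crop(video_feats, pred_interval)  # hypothetical helper
    logits = query_decoder(clip_feats, simplified_query_tokens[:, :-1])
    qg_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        simplified_query_tokens[:, 1:].reshape(-1),
    )

    # Closed loop: the query-generation loss is added to the grounding loss,
    # so its gradient acts as feedback for the temporal grounding model.
    total_loss = grounding_loss + lambda_qg * qg_loss
    total_loss.backward()
    return pred_interval, total_loss
```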