This report describes Badgers@UW-Madison, our submission to the Ego4D Natural Language Queries (NLQ) Challenge. Our solution inherits the point-based event representation from our prior work on temporal action localization and develops a Transformer-based model for video grounding. Further, it integrates several strong video features, including SlowFast, Omnivore, and EgoVLP. Without bells and whistles, our submission based on a single model achieves 12.64% Mean R@1 and is ranked 2nd on the public leaderboard. Meanwhile, our method attains 28.45% (18.03%) R@5 at tIoU=0.3 (0.5), surpassing the top-ranked solution by up to 5.5 absolute percentage points.
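For readers unfamiliar with the evaluation protocol, the sketch below illustrates how R@k at a tIoU threshold is conventionally computed for NLQ-style video grounding; it is not the authors' evaluation code, and the function names and example values are illustrative assumptions.

```python
# Minimal sketch of R@k at a tIoU threshold for temporal grounding
# (illustrative only; not the official Ego4D NLQ evaluation script).
from typing import List, Tuple

def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """Intersection-over-union of two temporal segments (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(predictions: List[List[Tuple[float, float]]],
                ground_truths: List[Tuple[float, float]],
                k: int, tiou_threshold: float) -> float:
    """Percentage of queries whose top-k predicted segments contain at least
    one segment overlapping the ground truth above the tIoU threshold."""
    hits = 0
    for preds, gt in zip(predictions, ground_truths):
        if any(temporal_iou(p, gt) >= tiou_threshold for p in preds[:k]):
            hits += 1
    return 100.0 * hits / len(ground_truths)

# Toy example: one query with five ranked candidate segments,
# evaluated at the thresholds used in the report.
preds = [[(10.0, 15.0), (30.0, 42.0), (0.0, 4.0), (50.0, 55.0), (20.0, 25.0)]]
gts = [(31.0, 40.0)]
print(recall_at_k(preds, gts, k=5, tiou_threshold=0.3))  # 100.0 (2nd candidate hits)
print(recall_at_k(preds, gts, k=1, tiou_threshold=0.3))  # 0.0 (top candidate misses)
```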