Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g., J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to each other. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset which contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison, and hence, more effectively pool knowledge from different methods across different tasks. Additionally, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/Ali2500/BURST-benchmark.
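As a point of reference for the metrics mentioned above, the J measure used in VOS benchmarks is the Jaccard index (intersection-over-union) between predicted and ground-truth masks. Below is a minimal, hedged sketch of computing J for a pair of binary masks; the function name and the toy masks are illustrative, not part of the BURST evaluation code.

```python
import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: intersection-over-union of two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union

# Toy example: two 4x4 masks that overlap in a single cell
a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True  # hypothetical prediction
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True  # hypothetical ground truth
print(round(jaccard(a, b), 3))  # intersection 1, union 7 -> ~0.143
```

Evaluating every task on the same masks with metrics of this kind is what makes the benchmark's results directly comparable across tasks.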