Existing video scene text detection (VSTD) benchmarks provide only bounding-box annotations in the spatial domain and lack the temporal relations of text instances across video frames, which hinders the development of video text-related applications. In this paper, we systematically introduce a new large-scale benchmark, named STVText4, a well-designed spatial-temporal detection metric (STDM), and a novel clustering-based baseline method, referred to as Temporal Clustering (TC). STVText4 opens a challenging yet promising direction of VSTD, termed ST-VSTD, which aims to detect video scene texts simultaneously in both the spatial and temporal domains. STVText4 contains more than 1.4 million text instances from 161,347 video frames of 106 videos, where each instance is annotated with not only a spatial bounding box and a temporal range but also four intrinsic attributes, namely legibility, density, scale, and lifecycle, to facilitate the community. By continuously propagating identical texts across the video sequence, TC accurately outputs the spatial quadrilateral and temporal range of each text, setting a strong baseline for ST-VSTD. Experiments demonstrate the efficacy of our method and the great academic and practical value of STVText4. The dataset and code will be released soon.
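To make the ST-VSTD setting concrete, the sketch below models a text instance as a track carrying a per-frame quadrilateral and an inclusive temporal range, and computes the temporal overlap between a predicted and a ground-truth track. The `TextTrack` schema and `temporal_iou` helper are illustrative assumptions, not the paper's actual STVText4 format or STDM metric.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Quad = List[Tuple[float, float]]  # four (x, y) corner points of a quadrilateral


@dataclass
class TextTrack:
    """One text instance tracked across frames (hypothetical schema)."""
    start_frame: int
    end_frame: int  # inclusive temporal range
    quads: Dict[int, Quad] = field(default_factory=dict)  # frame index -> quadrilateral


def temporal_iou(a: TextTrack, b: TextTrack) -> float:
    """Intersection-over-union of two tracks' frame ranges."""
    inter = min(a.end_frame, b.end_frame) - max(a.start_frame, b.start_frame) + 1
    if inter <= 0:
        return 0.0
    len_a = a.end_frame - a.start_frame + 1
    len_b = b.end_frame - b.start_frame + 1
    return inter / (len_a + len_b - inter)


# A ground-truth track spanning frames 10..29 and a prediction spanning 15..34
# share 15 frames out of a 25-frame union.
gt = TextTrack(start_frame=10, end_frame=29)
pred = TextTrack(start_frame=15, end_frame=34)
print(temporal_iou(gt, pred))  # 15 / 25 = 0.6
```

A spatio-temporal matching criterion would additionally require the per-frame quadrilaterals to overlap; this fragment isolates only the temporal component for clarity.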