Crowd counting in single-view images has achieved outstanding performance on existing counting datasets. However, single-view counting is not applicable to large and wide scenes (e.g., public parks, long subway platforms, or event spaces) because a single camera cannot capture the whole scene in adequate detail for counting, e.g., when the scene is too large to fit into the camera's field-of-view, so long that the resolution is too low on faraway crowds, or when there are too many large objects that occlude large portions of the crowd. Therefore, solving the wide-area counting task requires multiple cameras with overlapping fields-of-view. In this paper, we propose a deep neural network framework for multi-view crowd counting, which fuses information from multiple camera views to predict a scene-level density map on the ground plane of the 3D world. We consider three versions of the fusion framework: the late fusion model fuses camera-view density maps; the naive early fusion model fuses camera-view feature maps; and the multi-view multi-scale early fusion model ensures that features aligned to the same ground-plane point have consistent scales. A rotation selection module further ensures consistent rotation alignment of the features. We test our three fusion models on three multi-view counting datasets: PETS2009, DukeMTMC, and a newly collected multi-view counting dataset containing a crowded street intersection. Our methods achieve state-of-the-art results compared to other multi-view counting baselines.
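To make the late-fusion idea concrete, the following is a minimal NumPy sketch, not the paper's actual model: each camera-view density map is scattered onto the ground plane through a precomputed pixel-to-ground mapping (a hypothetical stand-in for the homography-based projection), and the warped maps are then combined. Here the fusion is a simple mean over views; in the paper this fusion step is learned by a CNN.

```python
import numpy as np

def warp_to_ground(density, mapping, ground_shape):
    """Scatter a camera-view density map onto the ground plane.

    mapping: integer array of shape (H, W, 2) giving, for each camera
    pixel, its (row, col) cell on the ground plane. In practice this
    would come from the camera's homography; here it is assumed given.
    """
    ground = np.zeros(ground_shape)
    H, W = density.shape
    for i in range(H):
        for j in range(W):
            r, c = mapping[i, j]
            ground[r, c] += density[i, j]
    return ground

def late_fusion(densities, mappings, ground_shape):
    """Late fusion: project each per-view density map to the ground
    plane, then combine them (mean over views in this toy sketch)."""
    warped = [warp_to_ground(d, m, ground_shape)
              for d, m in zip(densities, mappings)]
    return np.mean(warped, axis=0)
```

Early fusion differs only in *what* is warped: intermediate feature maps rather than final density maps are projected to the ground plane before a shared decoder predicts the scene-level density map.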