Multiview detection uses multiple calibrated cameras with overlapping fields of view to locate occluded pedestrians. In this field, existing methods typically adopt a "human modeling - aggregation" strategy. To find robust pedestrian representations, some intuitively use the locations of detected 2D bounding boxes, while others project entire frame features to the ground plane. However, the former disregards human appearance and leads to many ambiguities, and the latter suffers from projection errors because the accurate heights of the human torso and head are unknown. In this paper, we propose a new pedestrian representation scheme based on human point cloud modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds on the ground. We then aggregate the point clouds of the pedestrian cardboards across multiple views for a final decision. Compared with existing representations, the proposed method explicitly leverages human appearance and significantly reduces projection errors through relatively accurate height estimation. The proposed method achieves very competitive results on two standard evaluation benchmarks.
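The geometry underlying the cardboard representation can be illustrated with a minimal sketch: back-project a detected foot pixel through a pinhole camera, intersect the ray with the ground plane, and erect an upright, thin grid of 3D points at that location. This is an assumption-laden illustration, not the paper's implementation; the fixed `height` and `width` values stand in for the paper's estimated human heights, and the camera convention (world z-axis up, ground plane z = 0, extrinsics mapping world to camera coordinates) is assumed.

```python
import numpy as np

def pixel_to_ground(K, R, t, u, v):
    """Back-project pixel (u, v) to a world ray and intersect it with z = 0.

    K: 3x3 intrinsic matrix; R, t: extrinsics with x_cam = R @ x_world + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # ray direction in world frame
    cam_center = -R.T @ t                               # camera center in world frame
    s = -cam_center[2] / ray_world[2]                   # scale so the point has z = 0
    return cam_center + s * ray_world

def cardboard_points(foot_xy, height=1.8, width=0.5, n=8):
    """Erect an upright, thin 'cardboard' grid of 3D points above a foot location.

    height/width are illustrative constants, not estimated values.
    """
    xs = np.linspace(foot_xy[0] - width / 2, foot_xy[0] + width / 2, n)
    zs = np.linspace(0.0, height, n)
    X, Z = np.meshgrid(xs, zs)
    Y = np.full_like(X, foot_xy[1])
    return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=-1)  # (n*n, 3)
```

Aggregation across views then amounts to pooling the features sampled at these 3D points from each camera that sees them, which avoids the flattening error of projecting whole frames onto the ground plane.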