Multiview detection uses multiple calibrated cameras with overlapping fields of view to locate occluded pedestrians. In this field, existing methods typically adopt a ``human modeling - aggregation'' strategy. To find robust pedestrian representations, some methods use the locations of detected 2D bounding boxes, while others project entire frame features to the ground plane. However, the former ignores human appearance and leads to many ambiguities, and the latter suffers from projection errors due to inaccurate height estimates for the human torso and head. In this paper, we propose a new pedestrian representation scheme based on human point cloud modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds on the ground. We then aggregate the point clouds of the pedestrian cardboard across multiple views for a final decision. Compared with existing representations, the proposed method explicitly leverages human appearance and significantly reduces projection errors through relatively accurate height estimation. On two standard evaluation benchmarks, the proposed method achieves very competitive results.
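The cardboard representation above can be illustrated with a simplified sketch. The paper uses ray tracing for holistic human depth estimation; the toy version below instead back-projects a single foot pixel through an assumed camera calibration, intersects the ray with the ground plane, and erects an upright, thin planar ("cardboard") point cloud at that location. All names and numbers (intrinsics `K`, rotation `R`, camera center `C`, pedestrian height) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative calibration: camera 5 m above the ground, looking straight
# down; world z is up. These values are assumptions, not from the paper.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])  # rows: camera axes in world coordinates
C = np.array([0.0, 0.0, 5.0])     # camera center in world coordinates

def ground_point(u, v):
    """Back-project pixel (u, v) and intersect the ray with the z=0 plane."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d_world = R.T @ d_cam
    s = -C[2] / d_world[2]         # ray parameter at the ground intersection
    return C + s * d_world

def cardboard_cloud(foot, height=1.8, width=0.5, nh=19, nw=5):
    """Upright, thin planar point cloud standing at `foot`, facing the camera."""
    view = foot - C
    view[2] = 0.0                  # horizontal component of the viewing ray
    lateral = np.array([-view[1], view[0], 0.0])
    lateral /= np.linalg.norm(lateral)
    ws = np.linspace(-width / 2, width / 2, nw)
    hs = np.linspace(0.0, height, nh)
    return np.array([foot + w * lateral + np.array([0.0, 0.0, h])
                     for w in ws for h in hs])

foot = ground_point(700, 400)      # pedestrian's foot point on the ground
cloud = cardboard_cloud(foot)      # (95, 3) array of world-space points
```

In a multiview setting, one such cloud per camera would be projected into a common ground-plane coordinate frame and aggregated across views; the sketch covers only the per-view modeling step.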