Reliable and cost-effective counting of people in large indoor spaces is a significant challenge with many applications. An emerging approach is to deploy multiple fisheye cameras mounted overhead to monitor the whole space. However, due to the overlapping fields of view, person re-identification (PRID) is critical for counting accuracy. While PRID has been thoroughly researched for traditional rectilinear cameras, few methods have been proposed for fisheye cameras, and their performance is comparatively lower. To close this performance gap, we propose a multi-feature framework for fisheye PRID that combines deep-learning-based, color-based, and location-based features by means of a novel feature fusion. We evaluate the performance of our framework for various feature combinations on FRIDA, a public fisheye PRID dataset. The results demonstrate that our multi-feature approach outperforms recent appearance-based deep-learning methods by almost 18 percentage points and location-based methods by almost 3 percentage points in accuracy.
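To make the kind of multi-feature matching described above concrete, the short Python sketch below fuses a deep appearance similarity, a color-histogram similarity, and a location-based affinity into a single matching score. This is only an illustrative sketch under stated assumptions: the similarity choices, the Gaussian location model, the fusion weights, and all function names are assumptions, not the paper's actual fusion scheme.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two deep appearance embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def histogram_intersection(h1, h2):
    # Similarity between two L1-normalized color histograms, in [0, 1].
    return float(np.minimum(h1, h2).sum())

def location_similarity(p1, p2, sigma=1.0):
    # Gaussian affinity between two ground-plane positions (e.g., in meters).
    d = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return float(np.exp(-(d ** 2) / (2.0 * sigma ** 2)))

def fused_score(deep1, deep2, hist1, hist2, pos1, pos2, weights=(0.5, 0.2, 0.3)):
    # Weighted combination of deep, color, and location similarities
    # (weights are illustrative, not tuned on FRIDA).
    w_deep, w_color, w_loc = weights
    return (w_deep * cosine_similarity(deep1, deep2)
            + w_color * histogram_intersection(hist1, hist2)
            + w_loc * location_similarity(pos1, pos2))

if __name__ == "__main__":
    # Toy example: score a candidate match between detections from two cameras.
    rng = np.random.default_rng(0)
    emb_a, emb_b = rng.normal(size=128), rng.normal(size=128)
    hist_a = rng.random(64); hist_a /= hist_a.sum()
    hist_b = rng.random(64); hist_b /= hist_b.sum()
    print(fused_score(emb_a, emb_b, hist_a, hist_b, (1.0, 2.0), (1.2, 2.1)))

In such a score-level fusion, cross-camera identities would be assigned by picking, for each detection, the candidate with the highest fused score (or by solving a bipartite assignment over all pairwise scores).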