This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, effectively measuring motion, with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms that report 3D hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human-robot interaction and for obstacle avoidance in human-robot safety applications. Event data are pre-processed into intensity frames. Regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception. Event-based tracking of the human hand is demonstrated to be feasible in real time and at low computational cost. The proposed ROI-finding method reduces noise in the intensity images, achieving up to 89% data reduction relative to the original while preserving the relevant features. The depth estimation error relative to ground truth (measured with wearables), evaluated using dynamic time warping and a single event camera, ranges from 15 to 30 millimetres, depending on the plane in which it is measured. In summary: tracking of human hands in 3D space using single event camera data and lightweight algorithms to define ROI features.
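The depth-error comparison against the wearable ground truth relies on dynamic time warping (DTW), which aligns two trajectories that may be sampled at different rates before measuring their distance. A minimal, self-contained sketch of a DTW distance between two 1-D depth trajectories (illustrative only, not the authors' implementation; the example values are hypothetical):

```python
def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic time warping between two
    # 1-D sequences, using absolute difference as the local cost.
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# Hypothetical example: an estimated depth trajectory vs. ground truth
# sampled at a slightly different rate. DTW warps one onto the other.
est = [0.0, 1.0, 2.0, 3.0, 2.0]
gt  = [0.0, 1.0, 1.0, 2.0, 3.0, 2.0]
print(dtw_distance(est, gt))  # 0.0 - sequences align exactly under warping
```

Because DTW tolerates local time shifts, it is a natural choice when the camera-based estimate and the wearable reference are not frame-synchronised.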