We present OpenRoboCare, a multimodal dataset for robot caregiving, capturing expert occupational therapist demonstrations of Activities of Daily Living (ADLs). Caregiving tasks involve complex physical human-robot interactions, requiring precise perception under occlusions, safe physical contact, and long-horizon planning. While recent advances in robot learning from demonstrations have shown promise, large-scale, diverse, expert-driven datasets that capture real-world caregiving routines remain scarce. To address this gap, we collect data from 21 occupational therapists performing 15 ADL tasks on two manikins. The dataset spans five modalities: RGB-D video, pose tracking, eye-gaze tracking, task and action annotations, and tactile sensing, providing rich multimodal insight into caregiver movement, attention, force application, and task execution strategies. We further analyze expert caregiving principles and strategies, offering insights for improving robot efficiency and task feasibility. Additionally, our evaluations demonstrate that OpenRoboCare challenges state-of-the-art robot perception and human activity recognition methods, both critical for developing safe and adaptive assistive robots, highlighting the value of our contribution. See our website for additional visualizations: https://emprise.cs.cornell.edu/robo-care/.