This paper presents the first implementation of autonomous robotic auscultation of heart and lung sounds. To select auscultation locations that yield high-quality sounds, a Bayesian Optimization (BO) formulation leverages visual anatomical cues to predict where high-quality sounds are likely to be found, while using auditory feedback to adapt to patient-specific anatomical characteristics. Sound quality is estimated online using machine learning models trained on a database of heart and lung stethoscope recordings. Experiments on 4 human subjects show that our system autonomously captures heart and lung sounds of quality comparable to tele-operation by a human trained in clinical auscultation. Surprisingly, one of the subjects exhibited a previously unknown cardiac pathology that was first identified using our robot, which demonstrates the potential utility of autonomous robotic auscultation for health screening.
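To make the search strategy concrete, the following is a minimal, hypothetical sketch of Bayesian Optimization over candidate auscultation points in which a visual anatomical prior biases an acquisition function and measured sound quality provides the auditory feedback. The RBF kernel, UCB-style acquisition, prior weighting, and all function names (`select_next_point`, `measure_quality`, etc.) are illustrative assumptions, not the authors' implementation; the real system would replace `measure_quality` with the learned sound-quality model and `visual_prior` with predictions from the vision pipeline.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): BO over candidate
# chest locations, blending a visual anatomical prior with a GP posterior
# over sound quality estimated from the acquired audio.

def rbf_kernel(A, B, length_scale=0.05):
    """Squared-exponential kernel between point sets A (n,2) and B (m,2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_obs, y_obs, X_cand, noise=1e-3):
    """GP posterior mean/std of sound quality at candidate points."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_obs, X_cand)
    Kss = rbf_kernel(X_cand, X_cand)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y_obs
    var = np.clip(np.diag(Kss - Ks.T @ sol), 1e-12, None)
    return mean, np.sqrt(var)

def select_next_point(X_cand, visual_prior, X_obs, y_obs, kappa=1.0, w_prior=0.5):
    """UCB-style acquisition: GP mean + exploration bonus + visual prior term."""
    if len(X_obs) == 0:
        return X_cand[np.argmax(visual_prior)]  # start at the best visual guess
    mean, std = gp_posterior(np.asarray(X_obs), np.asarray(y_obs), X_cand)
    acq = mean + kappa * std + w_prior * visual_prior  # blend audio and vision
    return X_cand[np.argmax(acq)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_cand = rng.uniform(0, 1, size=(200, 2))                     # candidate locations
    visual_prior = np.exp(-((X_cand - 0.5) ** 2).sum(1) / 0.02)   # placeholder anatomical prior

    def measure_quality(x):  # stand-in for the learned quality estimator
        return float(np.exp(-((x - 0.55) ** 2).sum() / 0.01) + 0.05 * rng.standard_normal())

    X_obs, y_obs = [], []
    for _ in range(10):
        x = select_next_point(X_cand, visual_prior, X_obs, y_obs)
        X_obs.append(x)
        y_obs.append(measure_quality(x))
    print("best quality found:", max(y_obs))
```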