The field of mobile, wearable, and ubiquitous computing (UbiComp) is undergoing a revolutionary integration of machine learning. Devices can now diagnose diseases, predict heart irregularities, and unlock the full potential of human cognition. However, the underlying algorithms are not immune to biases with respect to sensitive attributes (e.g., gender, race), leading to discriminatory outcomes. The research communities of HCI and AI ethics have recently started to explore ways of reporting information about datasets in order to surface and, eventually, counter those biases. The goal of this work is to explore the extent to which the UbiComp community has adopted such reporting practices and to highlight potential shortcomings. Through a systematic review of papers published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) over the past five years (2018-2022), we found that progress on algorithmic fairness within the UbiComp community lags behind that of other fields. Our findings show that only a small portion (5%) of the published papers adheres to modern fairness reporting, while the overwhelming majority focuses solely on accuracy or error metrics. In light of these findings, our work provides practical guidelines for the design and development of ubiquitous technologies that strive not only for accuracy but also for fairness.
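For context, "fairness reporting" here means quantifying model behavior across groups defined by a sensitive attribute, rather than reporting only overall performance. The minimal sketch below, which is illustrative and not taken from the reviewed papers, contrasts overall accuracy with one common group fairness metric, the demographic parity gap; all function names and data are hypothetical.

```python
# Illustrative sketch (not from the paper): reporting a simple group fairness
# metric alongside overall accuracy. All names and data here are hypothetical.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = float(np.mean(y_pred[group == 0]))
    rate_b = float(np.mean(y_pred[group == 1]))
    return abs(rate_a - rate_b)

# Hypothetical binary-classifier outputs and a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g., self-reported gender

print(f"accuracy: {accuracy(y_true, y_pred):.3f}")                       # 0.875
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")  # 0.250
```

A model can score high on the first number while still treating the two groups very differently on the second, which is why reporting accuracy alone can mask discriminatory outcomes.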