Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and inconsistencies in smart home operations can therefore lead a user to wonder "why did the smart home do that?" In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques to contribute computational methods for explainable activity recognition. Specifically, we generate explanations for smart home activity recognition systems that describe which aspects of an activity led to the given classification. To do so, we introduce four computational techniques for generating natural language explanations of smart home data and compare their effectiveness at producing meaningful explanations. Through a study with everyday users, we evaluate user preferences toward the four explanation types. Our results show that the leading approach, SHAP, has a 92% success rate in generating accurate explanations. Moreover, in 84% of sampled scenarios, users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation, and discuss a range of topics for future work in this area.
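As a rough illustration of the kind of pipeline the abstract describes, the sketch below turns per-feature contributions to an activity classifier's prediction into a one-sentence natural language explanation. All feature names, activity labels, and the leave-one-feature-out attribution are hypothetical stand-ins for the paper's SHAP-based method; this is a minimal sketch of the idea, not the authors' implementation.

```python
# Minimal sketch: explain a smart-home activity classification by attributing
# the prediction to individual sensor-derived features, then verbalizing the
# most influential one. Feature/activity names are illustrative only, and the
# leave-one-feature-out attribution is a crude stand-in for SHAP values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["kitchen_motion_events", "stove_power_watts",
            "fridge_door_openings", "hour_of_day"]
ACTIVITIES = ["cooking", "sleeping", "watching_tv"]

# Synthetic stand-in data for one household (200 time windows).
rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))
y = rng.integers(0, len(ACTIVITIES), 200)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain(model, x, baseline):
    """Leave-one-feature-out attribution: how much does replacing each feature
    with a baseline value reduce the probability of the predicted activity?"""
    pred = int(model.predict([x])[0])
    p_full = model.predict_proba([x])[0, pred]
    contributions = {}
    for i, name in enumerate(FEATURES):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        contributions[name] = p_full - model.predict_proba([x_masked])[0, pred]
    top = max(contributions, key=contributions.get)
    return (f"Classified as '{ACTIVITIES[pred]}' mainly because {top} "
            f"had an influential value (contribution {contributions[top]:+.2f} "
            f"to the predicted probability).")

print(explain(model, X[0], X.mean(axis=0)))
```

In the paper's setting, the attribution step would instead be computed with SHAP, and the verbalization would be one of the four explanation-generation techniques evaluated in the user study.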