Machine learning is used for inference and decision making in wearable sensor systems. However, recent studies have found that machine learning algorithms are easily fooled by the addition of adversarial perturbations to their inputs. More interestingly, adversarial examples generated for one machine learning system are also effective against other systems. This property of adversarial examples is called transferability. In this work, we take a first step toward studying adversarial transferability in wearable sensor systems from the following perspectives: 1) transferability between machine learning systems, 2) transferability across subjects, 3) transferability across sensor body locations, and 4) transferability across datasets. We found strong untargeted transferability in most cases. Targeted attacks were less successful, with success rates ranging from $0\%$ to $80\%$. The transferability of adversarial examples depends on many factors, such as the inclusion of data from all subjects, sensor body position, the number of samples in the dataset, the type of learning algorithm, and the distribution of the source and target system datasets. The transferability of adversarial examples decreases sharply as the data distributions of the source and target systems become more distinct. We also provide guidelines for the community for designing robust sensor systems.
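To make the notion of transferability concrete, the following minimal PyTorch sketch crafts an untargeted FGSM adversarial example against one classifier (the source) and checks whether it also fools a second, independently initialized classifier (the target). The architectures, input shape, and $\epsilon$ are illustrative assumptions, not the paper's setup; a real experiment would first train both models on actual sensor data.

```python
# Minimal transferability sketch (assumptions, not the paper's exact setup):
# craft an FGSM example on a SOURCE model, test it on a TARGET model.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES, N_CLASSES, EPS = 60, 6, 0.1  # e.g., a flattened accelerometer window


def make_model():
    # Tiny MLP standing in for a wearable-sensor activity classifier.
    return nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                         nn.Linear(32, N_CLASSES))


# Two independently initialized systems; in practice, each would be trained.
source, target = make_model(), make_model()

x = torch.randn(1, N_FEATURES)  # one sensor window (placeholder data)
y = torch.tensor([2])           # its true activity label

# FGSM: perturb the input along the sign of the loss gradient of the SOURCE model.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(source(x_adv), y)
loss.backward()
x_adv = (x_adv + EPS * x_adv.grad.sign()).detach()

# Untargeted transfer succeeds if the TARGET model also misclassifies x_adv.
pred_src = source(x_adv).argmax(dim=1)
pred_tgt = target(x_adv).argmax(dim=1)
print(f"source prediction: {pred_src.item()}, target prediction: {pred_tgt.item()}")
print("transferred:", bool(pred_tgt.item() != y.item()))
```

In the paper's terminology, repeating this check over a test set and reporting the fraction of source-model adversarial examples that the target model misclassifies yields the untargeted transferability score; a targeted variant would instead count how often the target predicts a specific attacker-chosen label.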