Machine Learning (ML)-powered apps run on pervasive devices such as phones, tablets, smartwatches, and IoT devices. Recent advances in collaborative, distributed ML such as Federated Learning (FL) attempt to address the privacy concerns of users and data owners, and are therefore used by tech industry leaders such as Google, Facebook, and Apple. However, FL systems and models remain vulnerable to adversarial membership inference, attribute inference, and model poisoning attacks, especially in recently proposed FL-as-a-Service ecosystems, which can give attackers access to multiple ML-powered apps. In this work, we focus on the recently proposed Sponge attack: it is designed to soak up the energy consumed while executing inference (not training) of an ML model, without degrading the classifier's prediction performance. Recent work has shown that sponge attacks on ASIC-enabled GPUs can substantially increase power consumption and inference time. In this work, we investigate, for the first time, this attack in the mobile setting and measure the effect it can have on ML models running inside apps on mobile devices.
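To make the kind of measurement described above concrete, the sketch below (illustrative only, not the paper's actual harness) times per-inference latency of a TensorFlow Lite model on benign inputs versus candidate sponge inputs; the `model.tflite` file and the `make_sponge_input` helper are assumptions, and on-device energy impact would additionally require reading the platform's battery or power counters.

```python
# Minimal sketch, assuming a local "model.tflite" file: compare TFLite
# inference latency on benign inputs vs. candidate sponge inputs.
import time
import numpy as np
import tensorflow as tf

def make_sponge_input(shape, dtype):
    # Hypothetical placeholder: a real sponge attack would optimise this input
    # (e.g. via genetic search) to maximise activation density and energy use;
    # here we just return random data so the sketch runs end-to-end.
    return np.random.rand(*shape).astype(dtype)

def mean_latency_ms(interpreter, inputs):
    """Average wall-clock time of interpreter.invoke() over the given inputs."""
    in_idx = interpreter.get_input_details()[0]["index"]
    times = []
    for x in inputs:
        interpreter.set_tensor(in_idx, x)
        start = time.perf_counter()
        interpreter.invoke()
        times.append((time.perf_counter() - start) * 1000.0)
    return float(np.mean(times))

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # assumed model file
interpreter.allocate_tensors()
detail = interpreter.get_input_details()[0]
shape, dtype = detail["shape"], detail["dtype"]

benign = [np.random.rand(*shape).astype(dtype) for _ in range(50)]
sponge = [make_sponge_input(shape, dtype) for _ in range(50)]

print("benign latency (ms):", mean_latency_ms(interpreter, benign))
print("sponge latency (ms):", mean_latency_ms(interpreter, sponge))
```

A latency gap between the two input sets is only a proxy for the energy drain a sponge attack targets; an on-device study would pair such timings with hardware power readings.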