Modern mobile devices, although increasingly capable, cannot train state-of-the-art machine learning models without the assistance of servers, which in turn require access to potentially privacy-sensitive user data. Split learning has recently emerged as a promising technique for training complex deep learning (DL) models on low-powered mobile devices. The core idea behind this technique is to train the privacy-sensitive layers of a DL model on the mobile device while offloading the computationally intensive layers to a server. Although many works have explored the effectiveness of split learning in simulated settings, a usable toolkit for this purpose does not exist. In this work, we highlight the theoretical and technical challenges that need to be resolved to develop a functional framework that trains ML models on mobile devices without transferring raw data to a server. Focusing on these challenges, we propose SplitEasy, a framework for training ML models on mobile devices using split learning. Using the abstraction provided by SplitEasy, developers can run various DL models under a split learning setting with minimal modifications. We provide a detailed explanation of SplitEasy and perform experiments with six state-of-the-art neural networks. We demonstrate how SplitEasy can train models that cannot be trained solely on a mobile device, while incurring nearly constant time per data sample.
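The core idea described above can be illustrated with a minimal sketch (not SplitEasy's actual implementation; the layer sizes and NumPy-based two-party split are illustrative assumptions): the client keeps the first, privacy-sensitive layer on the device and transmits only intermediate activations, never raw data, to the server, which runs the deeper layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-party split of a tiny MLP: the client keeps the
# first (privacy-sensitive) layer; the server holds the deeper layer.
W_client = rng.standard_normal((4, 8))   # client-side layer weights
W_server = rng.standard_normal((8, 2))   # server-side layer weights

def client_forward(x):
    # Runs on the mobile device: raw data never leaves it.
    return np.maximum(x @ W_client, 0.0)  # ReLU activation

def server_forward(h):
    # Runs on the server, on received activations only.
    return h @ W_server

x = rng.standard_normal((1, 4))          # raw user data (stays on device)
activations = client_forward(x)          # only this crosses the network
output = server_forward(activations)
print(output.shape)                      # (1, 2)
```

In training, gradients with respect to the transmitted activations would flow back from the server to the client so each party can update its own layers, which is what keeps the per-sample cost on the device roughly constant regardless of the depth of the server-side layers.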