The use of Deep Learning (DL) in commercial applications such as image classification, sentiment analysis, and speech recognition is increasing. When training DL models with a large number of parameters and/or large datasets, the cost and speed of training can become prohibitive. Distributed DL training solutions that split a training job into subtasks and execute them over multiple nodes can decrease training time. However, the cost of current solutions, built predominantly for cluster computing systems, can still be an issue. In contrast to cluster computing systems, Volunteer Computing (VC) systems can lower the cost of computing, but applications running on VC systems must handle fault tolerance, variable network latency, and heterogeneity of compute nodes, and current solutions are not designed to do so. We design a distributed solution that can run DL training on a VC system using a data-parallel approach. We implement a novel asynchronous SGD scheme, called VC-ASGD, suited for VC systems. In contrast to traditional VC systems that lower cost by using untrustworthy volunteer devices, we lower cost by leveraging preemptible computing instances on commercial cloud platforms. By using preemptible instances, which require applications to be fault tolerant, we lower cost by 70-90% and improve data security.
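To illustrate the general idea behind asynchronous SGD in a data-parallel setting, the following is a minimal, self-contained sketch. It is not the paper's VC-ASGD algorithm; it is a simplified simulation, assuming a single parameter server that queues gradients pushed by workers and applies them in arrival order, so queued gradients may have been computed against weights that are stale by the time they are applied. The model (`y = w * x`), the shard layout, and the flush heuristic are all illustrative choices, not taken from the source.

```python
import random

def grad(w, batch):
    # Gradient of mean squared error for the toy 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def async_sgd(shards, lr=0.05, steps=200, seed=0):
    """Simulate asynchronous data-parallel SGD with a parameter server.

    Each shard stands in for one worker's local data. Workers push
    gradients into an inbox; the server applies queued updates
    irregularly, so gradients later in the queue were computed before
    earlier queued updates were applied (i.e., against stale weights).
    """
    rng = random.Random(seed)
    w = 0.0
    inbox = []  # gradients pushed by workers, applied in arrival order
    for _ in range(steps):
        shard = rng.choice(shards)      # some worker becomes ready
        inbox.append(grad(w, shard))    # computed against the current snapshot
        # The server flushes the queue at irregular times (illustrative rule).
        if rng.random() < 0.7 or len(inbox) > 3:
            while inbox:
                w -= lr * inbox.pop(0)
    while inbox:                        # apply any remaining updates
        w -= lr * inbox.pop(0)
    return w

# Synthetic data with true relation y = 3x, split into two worker shards.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
shards = [data[:2], data[2:]]
w = async_sgd(shards)
```

Despite the staleness, the estimate converges close to the true slope of 3.0 here, which is the trade-off asynchronous schemes exploit: workers never block on each other, at the cost of applying slightly outdated gradients. Handling worker preemption and heterogeneous node speeds on top of such a scheme is what a VC-oriented design must add.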