Recently, deep learning has been an area of intense research. However, as a computation-intensive task, deep learning relies heavily on GPU memory, which is usually expensive and scarce. Although extensive work has been proposed for dynamic GPU memory management, it is hard to apply to systems with multiple dynamic workloads, such as in-database machine learning systems. In this paper, we demonstrate TENSILE, a method that manages GPU memory at tensor granularity to reduce the GPU memory peak while accounting for multiple dynamic workloads. TENSILE tackles the cold-start and cross-iteration scheduling problems that exist in previous works. We implemented TENSILE on a deep learning framework built by ourselves and evaluated its performance. The experimental results show that TENSILE saves more GPU memory with less extra time overhead than prior works in both single and multiple dynamic workload scenarios.