Auto-scheduling for tensor programs is a process in which a search algorithm automatically explores candidate schedules (program transformations) for a given program on a target hardware platform to improve its performance. However, this can be a very time-consuming process depending on the complexity of the tensor program and the capacity of the target device, with often many thousands of program variants being explored. To address this, in this paper we introduce transfer-tuning, a novel approach to identify and reuse auto-schedules between tensor programs. We demonstrate this concept using Deep Neural Networks (DNNs), taking sets of auto-schedules from pre-tuned DNNs and using them to reduce the inference time of a new DNN. We compare transfer-tuning against the state-of-the-art Ansor auto-scheduler, defining the maximum possible speedup for a given DNN model as what Ansor achieves using its recommended full tuning time. On a server-class CPU and across 11 widely used DNN models, we observe that transfer-tuning achieves up to $88.41\%$ ($49.13\%$ on average) of this maximum speedup, while Ansor requires $6.5\times$ more search time on average to match it. We also evaluate transfer-tuning on a constrained edge CPU and observe that the differences in search time are exacerbated, with Ansor requiring $10.8\times$ more time on average to match transfer-tuning's speedup, which further demonstrates its value. Our code is available at https://www.github.com/gicLAB/transfer-tuning