Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations, such as kernel fusion, and (ii) low-level kernel optimizations, such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generating highly efficient code for recursive models for low-latency inference. Our compiler approach and low reliance on vendor libraries enable us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work across different backends.
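For context, the recursive models referred to here are ones whose computation follows an input-dependent structure such as a parse tree (e.g., TreeLSTMs), so the computation graph differs per input and resists pre-tuned, fixed-shape vendor kernels. The following is a minimal illustrative sketch of such a model, not code from the paper; the weights, shapes, and helper names are assumptions chosen for brevity.

```python
import numpy as np

HIDDEN = 4
# Hypothetical weights for illustration only.
W_leaf = np.random.randn(HIDDEN, HIDDEN)       # leaf transformation
W_child = np.random.randn(2 * HIDDEN, HIDDEN)  # internal-node combination

class Node:
    def __init__(self, left=None, right=None, feature=None):
        self.left, self.right, self.feature = left, right, feature

def embed(node):
    """Recursively compute a node embedding over an input-dependent tree."""
    if node.left is None:  # leaf: transform the raw input feature
        return np.tanh(node.feature @ W_leaf)
    # Internal node: combine the embeddings of the two children.
    children = np.concatenate([embed(node.left), embed(node.right)])
    return np.tanh(children @ W_child)

# A tiny three-node tree; real models traverse trees derived from each input.
tree = Node(Node(feature=np.ones(HIDDEN)), Node(feature=np.zeros(HIDDEN)))
print(embed(tree))
```

Because the tree shape is only known at inference time, a static graph optimizer cannot fuse or tile this computation ahead of time, which is the gap the abstract says Cortex addresses.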