Rapid progress in deep learning is leading to a diverse set of quickly changing models, with a dramatically growing demand for compute. However, as frameworks specialize performance optimization to patterns in popular networks, they implicitly constrain novel and diverse models that drive progress in research. We empower deep learning researchers by defining a flexible and user-customizable pipeline for optimizing training of arbitrary deep neural networks, based on data movement minimization. The pipeline begins with standard networks in PyTorch or ONNX and transforms computation through progressive lowering. We define four levels of general-purpose transformations, from local intra-operator optimizations to global data movement reduction. These operate on a data-centric graph intermediate representation that expresses computation and data movement at all levels of abstraction, including expanding basic operators such as convolutions to their underlying computations. Central to the design is the interactive and introspectable nature of the pipeline. Every part is extensible through a Python API, and can be tuned interactively using a GUI. We demonstrate competitive performance or speedups on ten different networks, with interactive optimizations discovering new opportunities in EfficientNet.
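The front end of the described pipeline accepts standard PyTorch or ONNX networks. The following is a minimal sketch of that entry point, assuming a hypothetical `pipeline` module for the paper's Python API; only the PyTorch-to-ONNX export uses real library calls, and the `from_onnx`/`optimize` names are illustrative placeholders, not the actual interface.

```python
import torch
import torch.nn as nn

# Any standard network serves as input; a tiny convolutional model keeps the
# example self-contained.
model = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3), nn.ReLU())
example_input = torch.randn(1, 3, 224, 224)  # dummy batch for tracing

# Export to ONNX -- one of the two supported front ends (PyTorch or ONNX).
torch.onnx.export(model, example_input, "model.onnx")

# Hypothetical sketch: load the ONNX graph into the data-centric graph IR and
# apply the four levels of transformations, from local intra-operator
# optimizations to global data movement reduction.
# sdfg = pipeline.from_onnx("model.onnx")
# sdfg = pipeline.optimize(sdfg, levels=["local", "fusion", "layout", "global"])
```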