Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes with every example, such as sequences of variable length, trees, and graphs. Existing dataflow-based programming models for DL---both static and dynamic declaration---either cannot readily express these dynamic models, or are inefficient due to repeated dataflow graph construction and processing, and difficulties in batched execution. We present Cavs, a vertex-centric programming interface and optimized system implementation for dynamic DL models. Cavs represents a dynamic network as a static vertex function $\mathcal{F}$ and a dynamic instance-specific graph $\mathcal{G}$, and performs backpropagation by scheduling the execution of $\mathcal{F}$ following the dependencies in $\mathcal{G}$. Cavs bypasses expensive graph construction and preprocessing overhead, allows for the use of static graph optimization techniques on the pre-defined operations in $\mathcal{F}$, and naturally exposes batched execution opportunities over different input graphs. Experiments comparing Cavs to two state-of-the-art frameworks for dynamic NNs (TensorFlow Fold and DyNet) demonstrate the efficacy of this approach: Cavs achieves nearly an order-of-magnitude speedup when training various dynamic NN architectures, and ablations demonstrate the contributions of our proposed batching and memory management strategies.
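To make the vertex-centric idea concrete, below is a minimal sketch of how a fixed vertex function $\mathcal{F}$ can be evaluated over per-example graphs $\mathcal{G}$. The names (`vertex_fn`, `forward`) and the scalar toy computation are illustrative assumptions, not the actual Cavs API; in Cavs, $\mathcal{F}$ would be a small dataflow graph of tensor operations (e.g., a Tree-LSTM cell), and backpropagation would traverse $\mathcal{G}$ in reverse.

```python
# A minimal, hypothetical sketch of the vertex-centric idea described above.
# `vertex_fn` and `forward` are illustrative names, not the actual Cavs API.
# The point: the vertex function F is static and shared across examples,
# while the instance graph G changes with every input.
import math

def vertex_fn(child_states):
    """Toy stand-in for F: combine child states, apply a nonlinearity.

    A real F would be a small dataflow graph of tensor operations
    (e.g., a Tree-LSTM cell); a scalar suffices to show the scheduling."""
    return math.tanh(1.0 + sum(child_states))

def forward(children, root):
    """Evaluate F over one instance graph G, dependencies first.

    `children` maps a vertex id to the ids it depends on; vertices
    absent from the map are leaves. Post-order traversal guarantees
    every dependency in G is computed before the vertex that reads it.
    Backpropagation would replay the same schedule in reverse."""
    state = {}

    def visit(v):
        if v not in state:
            state[v] = vertex_fn([visit(c) for c in children.get(v, [])])
        return state[v]

    return visit(root)

# Two inputs with different structures reuse the same vertex function:
chain = {2: [1], 1: [0]}        # a length-3 sequence (0 -> 1 -> 2)
tree = {4: [2, 3], 2: [0, 1]}   # a small binary tree rooted at 4
print(forward(chain, root=2))
print(forward(tree, root=4))
```

Because only $\mathcal{G}$ varies per example, the operations inside $\mathcal{F}$ can be optimized once ahead of time, and vertices at the same depth across different graphs can be batched together, which is the source of the speedups reported above.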