This paper proposes a standard way to represent sparse tensors and establishes a broad theoretical framework for the tensor data scattering methods used in various deep learning frameworks. It presents a theorem that is important for performance analysis and accelerator optimization when implementing data scattering; the theorem shows how the impossibility of slicing arises in tensor data scattering. A sparsity measuring formula is also provided, which effectively indicates both the storage efficiency of a sparse tensor and its potential for parallel processing. The source code, including CUDA code, is available in a related open-source project.