We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of machine learning models from the burden of keeping track of the order of axes and the purpose of each. The notation makes it easy to lift operations on low-order tensors to higher-order ones, for example, from images to minibatches of images, or from an attention mechanism to multiple attention heads. After a brief overview and formal definition of the notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. We then discuss differential calculus in our notation and compare with some alternative notations. Our proposals build on ideas from many previous papers and software libraries. We hope that our notation will encourage more authors to use named tensors, resulting in clearer papers and more precise implementations.
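To make the idea of named axes and lifting concrete, the following is a minimal sketch using PyTorch's experimental named-tensor API. This API is an illustrative stand-in, not the paper's own proposal (the paper defines mathematical notation, not a library); the axis names `batch`, `height`, and `width` are chosen for this example.

```python
import torch

# A single image and a minibatch of images, with axes identified by
# name rather than by position.
image = torch.randn(28, 28, names=('height', 'width'))
batch = torch.randn(16, 28, 28, names=('batch', 'height', 'width'))

# Operations refer to axes by purpose, independent of axis order:
col_sums = batch.sum('height')   # names: ('batch', 'width')

# Lifting: an operation written for one image applies unchanged to a
# minibatch, since broadcasting aligns axes by name.
centered = batch - image         # names: ('batch', 'height', 'width')
```

The same mechanism extends, for instance, an attention mechanism to multiple heads: adding a `heads` axis lifts the computation without rewriting it, which is the behavior the notation is designed to capture on paper.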