Much of the success of deep learning is drawn from building architectures that properly respect underlying symmetry and structure in the data on which they operate, a set of considerations that have been united under the banner of geometric deep learning. Problems in the physical sciences often deal with relatively small sets of points in two- or three-dimensional space, where translation, rotation, and permutation equivariance are important or even vital for models to be useful in practice. In this work, we present rotation- and permutation-equivariant architectures for deep learning on these small point clouds, composed of a set of products of terms from the geometric algebra and reductions over those products using an attention mechanism. The geometric algebra provides valuable mathematical structure by which to combine vector, scalar, and other types of geometric inputs in a systematic way to account for rotation invariance or covariance, while attention yields a powerful way to impose permutation equivariance. We demonstrate the usefulness of these architectures by training models to solve sample problems relevant to physics, chemistry, and biology.
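To make the mechanism described above concrete, the following minimal sketch (not the paper's actual architecture) combines rotation-invariant quantities from the geometric product of two 3D vectors with a softmax attention reduction over neighbors. The names `geometric_invariants`, `ga_attention`, and the toy weights `w_score`/`w_value` are hypothetical placeholders standing in for learned functions; centering the points is assumed to handle translation invariance.

```python
import numpy as np

def geometric_invariants(ri, rj):
    # Geometric product of two 3D vectors: ri rj = ri . rj + ri ^ rj.
    # The scalar (grade-0) part and the magnitude of the bivector
    # (grade-2) part are both rotation invariant.
    dot = ri @ rj                      # scalar part
    wedge = np.cross(ri, rj)           # dual of the bivector part
    return np.array([dot, np.linalg.norm(wedge)])

def ga_attention(points, w_score, w_value):
    """Rotation-invariant, permutation-equivariant attention (toy sketch).

    points : (N, 3) coordinates, assumed centered.
    w_score, w_value : toy linear weights standing in for learned MLPs.
    """
    n = len(points)
    # Pairwise rotation-invariant features from geometric products.
    inv = np.array([[geometric_invariants(points[i], points[j])
                     for j in range(n)] for i in range(n)])   # (N, N, 2)
    scores = inv @ w_score                                     # (N, N)
    # Softmax over neighbors j; permuting the inputs permutes the
    # per-point outputs identically, giving permutation equivariance.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    values = inv @ w_value                                     # (N, N, d)
    return np.einsum('ij,ijd->id', weights, values)            # (N, d)

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))
pts -= pts.mean(axis=0)                 # center for translation invariance
w_s, w_v = rng.normal(size=2), rng.normal(size=(2, 4))
out = ga_attention(pts, w_s, w_v)

# Rotating the cloud leaves the invariant outputs unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(out, ga_attention(pts @ R.T, w_s, w_v))
```

Because both the attention scores and the values are built only from rotation-invariant products, every output is invariant by construction; producing rotation-covariant (vector) outputs instead would require values built from the vector-valued parts of the products, which this sketch omits for brevity.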