Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa transformers and graph networks. We adapt a recent approach for geometric message passing to the non-compact Lorentz group, allowing the propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models achieve competitive to state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $10\times$ fewer FLOPs.
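To make the canonicalization idea concrete, the following is a minimal sketch of the wrapper pattern described above: per-particle Lorentz frames, assumed to be predicted equivariantly, map features into local frames where they become invariant, an arbitrary (non-equivariant) backbone processes them, and vector-valued outputs are mapped back to the global frame. All function and variable names here are ours for illustration and do not reflect the paper's actual API; features and outputs are assumed to be four-vector-valued.

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def lorentz_inverse(Lam):
    """Inverse of a Lorentz transformation: Lam^{-1} = eta Lam^T eta."""
    return ETA @ Lam.T @ ETA

def canonicalized_forward(backbone, frames, feats):
    """Local-canonicalization wrapper (illustrative sketch).

    frames   : (N, 4, 4) per-particle Lorentz matrices, assumed to be
               predicted equivariantly (they co-transform with the input).
    feats    : (N, F, 4) four-vector features per particle, global frame.
    backbone : any map (N, F, 4) -> (N, F, 4); need not be equivariant.
    """
    # Express features in each particle's local frame -> invariant inputs.
    local = np.einsum('imn,ifn->ifm', frames, feats)
    out_local = backbone(local)
    # Map vector-valued outputs back to the global frame -> equivariant outputs.
    inv = np.stack([lorentz_inverse(L) for L in frames])
    return np.einsum('imn,ifn->ifm', inv, out_local)
```

In this picture, replacing the learned frame predictor with randomly sampled Lorentz transformations recovers data augmentation as a special choice of reference frame, as stated in the abstract.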