We present an application of invariant polynomials in machine learning. Using the methods developed in previous work, we obtain two types of generators of the Lorentz- and permutation-invariant polynomials in particle momenta: minimal algebra generators and Hironaka decompositions. We discuss and prove approximation theorems that allow these invariant generators to be used in machine learning algorithms in general and in neural networks specifically. By implementing these generators in neural networks applied to regression tasks, we test the improvements in performance under a wide range of hyperparameter choices and find a reduction of the loss on training data and a significant reduction of the loss on validation data. As a complementary way of quantifying the performance of these neural networks, we treat the problem from a Bayesian inference perspective and employ nested sampling techniques to perform model comparison. Beyond a certain network size, we find that networks utilising Hironaka decompositions perform best.
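As an illustrative sketch only (not the implementation used in this work): for a set of particle four-momenta, the pairwise Minkowski products p_i · p_j are a standard generating set for the Lorentz-invariant polynomials, and can replace the raw momenta as inputs to a regression network. The Python below assumes this feature map and a small hypothetical multilayer perceptron; permutation invariance and the Hironaka decomposition are omitted for brevity.

```python
import numpy as np
import torch
import torch.nn as nn

# Minkowski metric, signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def lorentz_invariants(momenta: np.ndarray) -> np.ndarray:
    """Pairwise Minkowski products p_i . p_j for n four-momenta of shape (n, 4).

    These products generate the Lorentz-invariant polynomials in the momenta,
    so they can be used as network inputs in place of the raw momenta.
    """
    gram = momenta @ ETA @ momenta.T    # (n, n) Gram matrix of p_i . p_j
    iu = np.triu_indices(len(momenta))  # upper triangle incl. diagonal (no duplicates)
    return gram[iu]

# Hypothetical regression network taking the invariant features as input
n_particles = 4
n_features = n_particles * (n_particles + 1) // 2
model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Example: random momenta -> invariant features -> network output
momenta = np.random.randn(n_particles, 4)
x = torch.tensor(lorentz_invariants(momenta), dtype=torch.float32)
print(model(x))
```

By construction, the network's output is unchanged under a simultaneous Lorentz transformation of all input momenta, since only the invariant products enter the model.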