Normalization of polynomials plays an essential role in the approximate basis computation of vanishing ideals. In computer algebra, coefficient normalization, which normalizes a polynomial by the norm of its coefficient vector, is the most common approach. In this study, inspired by recent results in machine learning, we propose gradient-weighted normalization for the approximate border basis computation of vanishing ideals. The data-dependent nature of gradient-weighted normalization yields powerful properties, such as better stability against input perturbation and consistency under the scaling of input points, which conventional coefficient normalization cannot attain. With slight modifications, the existing analysis of algorithms with coefficient normalization carries over to gradient-weighted normalization, and the time complexity remains unchanged. We also provide an upper bound on the coefficient norm in terms of the gradient-weighted norm, which allows us to discuss approximate border bases with gradient-weighted normalization from the perspective of the coefficient norm.
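To make the contrast concrete, the following is a minimal sketch of the two normalization schemes for a single bivariate polynomial. The polynomial, the point set, and all helper names are illustrative assumptions, not the paper's actual algorithm or API: the coefficient norm depends only on the polynomial, while the gradient-weighted norm aggregates the gradient of the polynomial over the input points and is therefore data-dependent.

```python
import math

# Illustrative example (not from the paper): p(x, y) = x^2 + y^2 - 1,
# whose zero set is the unit circle, so points on the circle are
# (approximately) vanishing points of p.

# Coefficient vector of p in the monomial basis [x^2, y^2, 1].
coeffs = [1.0, 1.0, -1.0]

def coefficient_norm(c):
    """Euclidean norm of the coefficient vector (data-independent)."""
    return math.sqrt(sum(ci * ci for ci in c))

def grad_p(x, y):
    """Gradient of p(x, y) = x^2 + y^2 - 1."""
    return (2.0 * x, 2.0 * y)

def gradient_weighted_norm(points):
    """Norm of the stacked gradients of p over the input points,
    i.e. sqrt(sum_i ||grad p(x_i)||^2); depends on the data."""
    return math.sqrt(sum(gx * gx + gy * gy
                         for gx, gy in (grad_p(x, y) for x, y in points)))

# Sample points on the unit circle (near-vanishing points of p).
points = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]

c_norm = coefficient_norm(coeffs)        # sqrt(3), regardless of the data
g_norm = gradient_weighted_norm(points)  # 4.0 for these points

# Rescaling the input points changes the gradient-weighted norm but not
# the coefficient norm; this data dependence underlies the scaling
# consistency discussed in the abstract.
```

Normalizing a polynomial then means dividing it by the chosen norm; with gradient-weighted normalization, the resulting basis polynomials are scaled relative to the data rather than to their coefficients.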
翻译:在计算机代数中,系数正常化是最常见的方法。在本研究中,我们提议,在机器学习最近的结果的启发下,对消亡理想的大致边界计算基数进行梯度加权正常化。梯度加权正常化具有依赖数据的性质,因此具有强大的特性,例如,由于常规系数正常化无法达到的输入点的伸缩比重的稳定性和一致性得到更好的稳定。如果稍作修改,对系数正常化的算法的分析仍然与梯度加权正常化和时间复杂性没有变化。我们还根据梯度加权规范对系数标准作出上限限制,从而使我们能够从系数规范的角度讨论粗度标准化的粗度边界基数。