Differentially private weighted prefix sums under continual observation are a crucial component in the production deployment of private next-word prediction for Gboard, which, according to Google, has over a billion users. More specifically, Google uses a differentially private mechanism to sum weighted gradients in its \emph{private follow-the-regularized-leader} algorithm. Apart from efficiency, the additive error of the private mechanism is crucial: multiplied by the square root of the model's dimension $d$ (with $d$ ranging up to $10$ trillion, for example, in Switch Transformers or M6-10T), it determines the accuracy of the learning system, so any improvement in the leading constant matters significantly in practice. In this paper, we show a novel connection between mechanisms for continual weighted prefix sums and a concept in representation theory known as the group matrix, introduced in the correspondence between Dedekind and Frobenius (1897) and generalized by Schur (1904). To the best of our knowledge, this is the first application of group algebras to the analysis of differentially private algorithms. Using this connection, we analyze a class of matrix norms known as {\em factorization norms}, which give upper and lower bounds on the additive error of the matrix mechanism under general $\ell_p$-norms. This allows us to give the first efficient factorization that matches the best-known non-constructive upper bound on the factorization norm by Mathias (1993) for the matrix used in Google's deployment, improving on the previous best constructive bounds of Fichtenberger et al. (ICML 2023) and Henzinger et al. (SODA 2023). It also yields the first upper bounds on the additive error for a large class of weight functions in weighted prefix-sum problems, including the sliding-window matrix of Bolot et al. (ICDT 2013).
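To make the role of the factorization concrete, the following is a minimal sketch of the matrix mechanism for weighted prefix sums: the lower-triangular Toeplitz workload matrix $A$ with entries $A_{i,j} = w(i-j)$ for $j \le i$ is factored as $A = LR$, Gaussian noise is added to $Rx$, and the noisy vector is mapped back through $L$, so the additive error is governed by the largest row norm of $L$ while the sensitivity is governed by the largest column norm of $R$; their product is a factorization norm of $A$. The function names, the square-root choice of factorization, and the noise calibration below are illustrative assumptions, not the construction introduced in the paper.

```python
import numpy as np
from scipy.linalg import sqrtm


def workload_matrix(weights):
    """Lower-triangular Toeplitz workload A with A[i, j] = weights[i - j] for j <= i.

    weights[k] is the weight applied to the input that arrived k steps ago;
    weights = [1, 1, ..., 1] recovers plain continual counting (prefix sums).
    """
    n = len(weights)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            A[i, j] = weights[i - j]
    return A


def square_root_factorization(A):
    """One illustrative factorization A = L @ R, taking L = R = sqrt(A).

    This is only one possible choice of factorization; the paper's
    construction is different and is tuned to minimize the factorization norm.
    """
    S = np.real(sqrtm(A))  # A is lower triangular with positive diagonal, so sqrtm(A) is real
    return S, S


def matrix_mechanism(x, weights, sigma, rng=None):
    """Release all weighted prefix sums A @ x via the factorization A = L @ R.

    sigma is the Gaussian noise multiplier; it is assumed to be calibrated
    elsewhere to the desired (epsilon, delta) and to the sensitivity, i.e.
    the largest column norm of R.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = workload_matrix(weights)
    L, R = square_root_factorization(A)
    z = rng.normal(scale=sigma, size=len(x))  # noise injected in "R-space"
    return L @ (R @ x + z)                    # equals A @ x + L @ z


if __name__ == "__main__":
    # The per-step error scales with sigma times the largest row norm of L;
    # together with the column norms of R this is the quantity the
    # factorization-norm bounds control.
    weights = [1.0] * 8            # unweighted prefix sums (continual counting)
    x = np.ones(8)                 # one unit of "gradient mass" per step
    noisy = matrix_mechanism(x, weights, sigma=1.0)
    exact = workload_matrix(weights) @ x
    print(np.round(noisy - exact, 3))
```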