Our work addresses the well-known open problem of distributed computation of bilinear functions of two correlated sources ${\bf A}$ and ${\bf B}$. In a setting with two nodes, where the first node has access to ${\bf A}$ and the second to ${\bf B}$, we establish bounds on the optimal sum-rate that allows a receiver to compute an important class of non-linear functions, and in particular bilinear functions, including dot products $\langle {\bf A},{\bf B}\rangle$ and general matrix products ${\bf A}^{\intercal}{\bf B}$ over finite fields. These bounds are tight for large field sizes, in which case we derive the exact fundamental performance limits for all problem dimensions and a large class of sources. Our achievability scheme involves the design of non-linear transformations of ${\bf A}$ and ${\bf B}$, carefully calibrated to work synergistically with the structured linear encoding scheme of K\"orner and Marton. The converse derived here adapts the Han-Kobayashi approach to yield a relatively tight bound on the sum rate. We also demonstrate unbounded compression gains over Slepian-Wolf coding, depending on the source correlations. In summary, our work derives fundamental limits for distributed computation of a crucial class of functions, succinctly capturing the computation structures and source correlations. Our findings are subsequently applied to the practical master-workers-receiver framework, where each of $N$ distributed workers has a bounded memory reflecting a bounded computational capability. By combining the above scheme with the polynomial code framework, we design novel structured polynomial codes for distributed matrix multiplication and show that our codes can surpass the performance of the existing state of the art, while also adapting these new codes to support chain matrix multiplications and information-theoretically secure computations.