We study the problem of estimating the partition function $Z(\beta) = \sum_{x \in \Omega} \exp[-\beta \cdot H(x)]$ of a Gibbs distribution defined by a Hamiltonian $H(\cdot)$. It is well known that $Z(\beta)$ can be approximated accurately by simulated annealing, given a sampling oracle that draws samples from the Gibbs distribution at any specified inverse temperature $\beta$. This method yields the most efficient known reductions from counting to sampling, including:

$\bullet$ classic non-adaptive (parallel) algorithms with sub-optimal cost [DFK89; Bez+08];

$\bullet$ adaptive (sequential) algorithms with near-optimal cost [SVV09; Hub15; Kol18; HK23].

In this paper, we give an algorithm that is efficient in both parallelism and total work: it provides a reduction from counting to sampling with near-optimal total work and logarithmic depth of computation. Consequently, it yields work-efficient parallel counting algorithms for several important models, including the hardcore and Ising models in the uniqueness regime.
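To fix ideas, the standard annealing estimator underlying these counting-to-sampling reductions can be sketched as follows. This is only an illustration, not the paper's algorithm: the state space, the Hamiltonian $H(x) = x$, the exact sampling oracle, and the fixed (non-adaptive) cooling schedule are all toy choices made for the example. It uses the telescoping identity $Z(\beta_{i+1})/Z(\beta_i) = \mathbb{E}_{\mu_{\beta_i}}[\exp(-(\beta_{i+1}-\beta_i) H(x))]$, estimating each ratio by a sample mean.

```python
import math
import random

# Toy state space, small enough to sample from mu_beta(x) ~ exp(-beta*H(x)) exactly.
OMEGA = list(range(8))

def H(x):
    # Hypothetical Hamiltonian for illustration: energy equals the state's index.
    return x

def gibbs_sample(beta, rng):
    """Sampling oracle: draw x with probability exp(-beta*H(x)) / Z(beta)."""
    weights = [math.exp(-beta * H(x)) for x in OMEGA]
    r = rng.random() * sum(weights)
    for x, w in zip(OMEGA, weights):
        r -= w
        if r <= 0:
            return x
    return OMEGA[-1]

def estimate_Z(beta_target, num_steps=20, samples_per_step=2000, seed=0):
    """Annealing estimator: Z(beta_target) = Z(0) * prod_i (Z(beta_{i+1})/Z(beta_i)),
    with each ratio E_{mu_{beta_i}}[exp(-(beta_{i+1}-beta_i)*H(x))] replaced by
    a sample mean over draws from the oracle at beta_i."""
    rng = random.Random(seed)
    betas = [beta_target * i / num_steps for i in range(num_steps + 1)]
    log_Z = math.log(len(OMEGA))  # Z(0) = |Omega|, since every weight is 1
    for b_lo, b_hi in zip(betas, betas[1:]):
        ratios = [math.exp(-(b_hi - b_lo) * H(gibbs_sample(b_lo, rng)))
                  for _ in range(samples_per_step)]
        log_Z += math.log(sum(ratios) / samples_per_step)
    return math.exp(log_Z)

exact = sum(math.exp(-1.0 * H(x)) for x in OMEGA)  # ground truth for the toy model
print(exact, estimate_Z(1.0))
```

Note that the steps here use a fixed schedule, so the per-step ratio estimates are mutually independent and could be computed in parallel; the adaptive algorithms cited above instead choose the next $\beta_{i+1}$ from earlier samples, which is what makes them inherently sequential.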