Given functions $f$ and $g$ defined on the subset lattice of order $n$, their min-sum subset convolution, defined for all $S \subseteq [n]$ as \[ (f \star g)(S) = \min_{T \subseteq S}\:\big(f(T) + g(S \setminus T)\big), \] lies at the heart of several NP-hard optimization problems, such as minimum-cost $k$-coloring, the prize-collecting Steiner tree, and many others in computational biology. Despite its importance, its na\"ive $O(3^n)$-time evaluation remains the fastest known, the only other alternative being an $\tilde O(2^n M)$-time algorithm for instances where the input functions have a bounded integer range $\{-M, \ldots, M\}$. We study for the first time the $(1 + \varepsilon)$-approximate min-sum subset convolution and present both a weakly- and a strongly-polynomial approximation algorithm, running in time $\tilde O(2^n \log M / \varepsilon)$ and $\tilde O(2^\frac{3n}{2} / \sqrt{\varepsilon})$, respectively. To demonstrate the applicability of our work, we present the first exponential-time $(1 + \varepsilon)$-approximation schemes for the above optimization problems. Our algorithms lie at the intersection of two lines of research that have so far been considered separately: $\textit{sequence}$ and $\textit{subset}$ convolutions in semi-rings. We also extend the recent framework of Bringmann, K\"unnemann, and W\k{e}grzycki [STOC 2019] to the context of subset convolutions.
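For concreteness, the na\"ive $O(3^n)$-time evaluation mentioned above can be sketched as follows. This is an illustrative implementation, not the paper's approximation algorithm: it represents $f$ and $g$ as arrays of length $2^n$ indexed by subset bitmask, and enumerates every submask $T$ of each set $S$ (the standard $(T-1)\,\&\,S$ trick, which visits $3^n$ pairs $(S, T)$ in total).

```python
def min_sum_subset_convolution(f, g, n):
    """Naive O(3^n) min-sum subset convolution.

    f, g: lists of length 2**n, indexed by subset bitmask.
    Returns h with h[S] = min over T subseteq S of f[T] + g[S \\ T].
    """
    INF = float("inf")
    h = [INF] * (1 << n)
    for S in range(1 << n):
        T = S
        while True:
            # S ^ T equals S \ T because T is a submask of S.
            h[S] = min(h[S], f[T] + g[S ^ T])
            if T == 0:
                break
            T = (T - 1) & S  # next submask of S
    return h
```

The doubly nested enumeration is why each element of $[n]$ contributes a factor of $3$: for every position, it is either outside $S$, in $T$, or in $S \setminus T$.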