Knapsack is one of the most fundamental problems in theoretical computer science. In the $(1 - \epsilon)$-approximation setting, although there is a fine-grained lower bound of $(n + 1/\epsilon)^{2 - o(1)}$ based on the $(\min, +)$-convolution hypothesis ([Künnemann, Paturi and Schneider, ICALP 2017] and [Cygan, Mucha, Węgrzycki and Włodarczyk, 2017]), the best known algorithm is randomized and runs in $\tilde O\left(n + (1/\epsilon)^{11/5}/2^{\Omega(\sqrt{\log(1/\epsilon)})}\right)$ time [Deng, Jin and Mao, SODA 2023], and it remains an important open problem whether an algorithm with a running time matching the lower bound (up to a sub-polynomial factor) exists. We answer this question affirmatively by giving a deterministic $(1 - \epsilon)$-approximation scheme for knapsack that runs in $\tilde O(n + (1/\epsilon)^{2})$ time. We first apply a known lemma recursively to reduce the problem to computing an $n\epsilon$-additive approximation for $n$ items with profits in $[1, 2)$. We then give a simple and efficient geometry-based algorithm for the reduced problem.
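To make the $(1 - \epsilon)$-approximation setting concrete, here is a minimal sketch of the classic textbook profit-scaling FPTAS (round profits down to multiples of $\epsilon p_{\max}/n$, then run a dynamic program over rounded profits). This is not the paper's near-quadratic algorithm; it runs in $O(n^2 \lceil n/\epsilon \rceil)$ time, assumes nonnegative weights and capacity, and the function name is only illustrative.

```python
from typing import List, Tuple

def knapsack_fptas(profits: List[float], weights: List[float],
                   capacity: float, eps: float) -> Tuple[float, List[int]]:
    """Textbook profit-scaling FPTAS: returns (profit, item indices) with
    profit >= (1 - eps) * OPT.  Time O(n^2 * ceil(n/eps)); for illustration
    only, not the near-quadratic algorithm discussed above."""
    n = len(profits)
    feasible = [i for i in range(n) if weights[i] <= capacity]
    if not feasible or eps <= 0:
        return 0.0, []
    p_max = max(profits[i] for i in feasible)
    if p_max <= 0:
        return 0.0, []
    K = eps * p_max / n                        # scaling factor
    scaled = [int(profits[i] // K) for i in range(n)]
    total = sum(scaled[i] for i in feasible)   # largest reachable scaled profit

    INF = float("inf")
    # dp[i][j] = minimum weight of a subset of the first i items
    #            whose scaled profits sum to exactly j
    dp = [[INF] * (total + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        p, w = scaled[i - 1], weights[i - 1]
        for j in range(total + 1):
            best = dp[i - 1][j]                # skip item i-1
            if j >= p and dp[i - 1][j - p] + w < best:
                best = dp[i - 1][j - p] + w    # take item i-1
            dp[i][j] = best

    # best scaled profit reachable within the capacity (j = 0 is always feasible)
    j_best = max(j for j in range(total + 1) if dp[n][j] <= capacity)

    # backtrack to recover the chosen items
    items, j = [], j_best
    for i in range(n, 0, -1):
        if dp[i][j] != dp[i - 1][j]:           # item i-1 was taken
            items.append(i - 1)
            j -= scaled[i - 1]
    return sum(profits[i] for i in items), items
```

Since each of the at most $n$ chosen items loses less than $K$ profit to rounding, the returned solution is within $nK = \epsilon p_{\max} \le \epsilon \cdot \mathrm{OPT}$ of optimal, which gives the $(1 - \epsilon)$ guarantee; the point of the paper is to replace this polynomial-in-$n$ dependence with a running time of $\tilde O(n + (1/\epsilon)^2)$.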