Goemans and Rothvoss (SODA'14) gave a framework for solving, in time $enc(P)^{2^{O(N)}}enc(Q)^{O(1)}$, problems that can be described as finding a point in $\text{int.cone}(P\cap\mathbb{Z}^N)\cap Q$, where $P,Q\subset\mathbb{R}^N$ are (bounded) polyhedra. This framework can be used to solve various scheduling problems, but the encoding length $enc(P)$ usually involves large parameters such as the makespan. We describe three tools that improve this framework: problem-specific preprocessing, LP relaxation techniques, and a new bound on the number of vertices of the integer hull. In particular, applied to the classical scheduling problem $P||C_{\max}$, each of these tools improves the running time from $(\log(C_{\max}))^{2^{O(d)}} enc(I)^{O(1)}$ to the possibly much better $(\log(p_{\max}))^{2^{O(d)}}enc(I)^{O(1)}$. Here, $p_{\max}$ is the largest processing time, $d$ is the number of different processing times, $C_{\max}$ is the makespan, and $enc(I)$ is the encoding length of the instance. This running time is FPT w.r.t. the parameter $d$ if $p_{\max}$ is given in unary. We obtain similar results for various other problems. Moreover, we show how a balancing result by Govzmann et al. can be used to speed up an additive approximation scheme by Buchem et al. (ICALP'21) in the high-multiplicity setting. On the complexity side, we use reductions from the literature to provide new parameterized lower bounds for $P||C_{\max}$ and to show that the improved running time of the additive approximation algorithm is likely optimal. Finally, we show that the big open question asked by Mnich and van Bevern (Comput. Oper. Res. '18), whether $P||C_{\max}$ is FPT w.r.t. the number of job types $d$, has the same answer as the question whether $Q||C_{\max}$ is FPT w.r.t. the number of job and machine types $d+\tau$ (both in high-multiplicity encoding). The same holds for the objective $C_{\min}$.
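For concreteness, here is a sketch of how the decision version of $P||C_{\max}$ can be cast in this framework; this follows the standard configuration view and is not necessarily the exact choice of polyhedra used in the paper. Assume $d$ job types with processing times $p_1,\dots,p_d$, job multiplicities $n\in\mathbb{Z}_{\ge 0}^d$, $m$ identical machines, and a target makespan $T$. Take
\[
P=\Bigl\{(c,1)\in\mathbb{R}_{\ge 0}^{d}\times\{1\} : \sum_{j=1}^{d} c_j p_j \le T\Bigr\},
\qquad
Q=\{(n,m)\},
\]
so the integer points of $P$ are the machine configurations, and a schedule with makespan at most $T$ exists iff $(n,m)\in\text{int.cone}(P\cap\mathbb{Z}^{d+1})\cap Q$, i.e., iff $n$ is a sum of exactly $m$ configurations (counted by the last coordinate). Since $enc(P)$ then depends on $T\le C_{\max}$, this instantiation yields the $(\log(C_{\max}))^{2^{O(d)}}$ factor mentioned above, which the three tools replace by a dependence on $\log(p_{\max})$.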