The well-known complexity class NP contains combinatorial problems whose optimization counterparts are important in many practical settings. These problems typically assume full knowledge of the input. In practice, however, uncertainty in the input data is a common phenomenon, and it is normally not captured by the optimization versions of NP problems. One concept for modeling uncertainty in the input data is recoverable robustness. An instance of the recoverable robust version of a combinatorial problem P is split into a base scenario $\sigma_0$ and an uncertainty scenario set $\textsf{S}$. The base scenario and all members of the uncertainty scenario set are instances of the original combinatorial problem P. The task is to compute a solution $s_0$ for the base scenario $\sigma_0$ and a solution $s$ for every uncertainty scenario $\sigma \in \textsf{S}$ such that $s_0$ and $s$ are not too far apart according to a distance measure, so that $s_0$ can easily be adapted to $s$. This paper introduces Hamming Distance Recoverable Robustness, in which solutions $s_0$ and $s$ have to be computed such that they differ in at most $\kappa$ elements. We survey the complexity of Hamming distance recoverable robust versions of optimization problems typically found in NP, for different scenario encodings. The complexity is primarily situated in the lower levels of the polynomial hierarchy. The main contribution of the paper is a gadget reduction framework showing that the recoverable robust versions of problems in a large class of combinatorial problems are $\Sigma^P_{3}$-complete. This class includes problems such as Vertex Cover, Coloring, or Subset Sum. Additionally, we extend the results to $\Sigma^P_{2m+1}$-completeness for multi-stage recoverable robust problems with $m \in \mathbb{N}$ stages.
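To make the recovery constraint concrete, the following sketch (not from the paper; the representation of solutions as element sets and the function names are illustrative assumptions) checks whether a scenario solution $s$ is within Hamming distance $\kappa$ of the base solution $s_0$, where the Hamming distance of two solutions is the number of elements contained in exactly one of them:

```python
def hamming_distance(s0: set, s: set) -> int:
    # Hamming distance between two solutions viewed as element sets:
    # the size of their symmetric difference, i.e. the number of
    # elements that would have to be added or removed to turn s0 into s.
    return len(s0 ^ s)

def recovery_feasible(s0: set, s: set, kappa: int) -> bool:
    # s is a feasible recovery of s0 if they differ in at most kappa elements.
    return hamming_distance(s0, s) <= kappa

# Hypothetical example: two solutions (e.g. vertex covers) where one
# vertex is swapped for another, giving distance 2 (one removal, one addition).
s0 = {1, 2, 3}
s = {1, 2, 4}
print(hamming_distance(s0, s))      # 2
print(recovery_feasible(s0, s, 2))  # True
```

Note that under this set view a single element swap already costs distance 2, since both the removed and the added element count toward the symmetric difference.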