We study how to utilize (possibly erroneous) predictions in a model for computing under uncertainty in which an algorithm can query unknown data. Our aim is to minimize the number of queries needed to solve the minimum spanning tree problem, a fundamental combinatorial optimization problem that has also been central to the research area of explorable uncertainty. For all integral $\gamma\ge 2$, we present algorithms that are $\gamma$-robust and $(1+\frac{1}{\gamma})$-consistent, meaning that they use at most $\gamma\, OPT$ queries if the predictions are arbitrarily wrong and at most $(1+\frac{1}{\gamma})\, OPT$ queries if the predictions are correct, where $OPT$ is the optimal number of queries for the given instance. Moreover, we show that this trade-off is best possible. Furthermore, we argue that a suitably defined hop distance is a useful measure for the amount of prediction error, and we design algorithms with performance guarantees that degrade smoothly with the hop distance. We also show that the predictions are PAC-learnable in our model. Our results demonstrate that untrusted predictions can circumvent the known lower bound of~$2$ without any degradation of the worst-case ratio. To obtain our results, we provide new structural insights for the minimum spanning tree problem that might be useful in the context of query-based algorithms regardless of predictions. In particular, we generalize the concept of witness sets -- the key to lower-bounding the optimum -- by proposing novel global witness set structures and completely new ways of adaptively using them.