In their seminal paper that initiated the field of algorithmic mechanism design, \citet{NR99} studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines with the goal of minimizing the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and despite significant effort, $n$ remains the best known approximation, and recent work by \citet{CKK21} proved an $\Omega(\sqrt{n})$ approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the ``learning-augmented framework'', whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of~\citet{NR99} using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the ``best of both worlds'': an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.
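To make the consistency and robustness terminology concrete, the following is a minimal sketch of the standard learning-augmented definitions, instantiated for makespan minimization. The notation here is illustrative and not necessarily the paper's: $M$ denotes the mechanism, $t$ the true processing times, $\hat{t}$ the predicted ones, and $\mathrm{MS}$ and $\mathrm{OPT}$ the achieved and optimal makespan, respectively.
\[
\text{$\alpha$-consistency:} \quad \mathrm{MS}\bigl(M(t,\hat{t}\,),\, t\bigr) \;\le\; \alpha \cdot \mathrm{OPT}(t) \quad \text{whenever } \hat{t} = t,
\]
\[
\text{$\beta$-robustness:} \quad \mathrm{MS}\bigl(M(t,\hat{t}\,),\, t\bigr) \;\le\; \beta \cdot \mathrm{OPT}(t) \quad \text{for every prediction } \hat{t}.
\]
Under these definitions, the guarantees stated above mean the mechanism produces a schedule with makespan at most $6 \cdot \mathrm{OPT}$ when the prediction is exactly correct, and at most $2n \cdot \mathrm{OPT}$ no matter how wrong the prediction is.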