The competitive auction was first proposed by Goldberg, Hartline, and Wright, who introduced the competitive analysis framework from online algorithm design into the traditional revenue-maximizing auction design problem. While competitive analysis is concerned only with worst-case bounds, a growing body of work in the online algorithms community studies the learning-augmented framework, in which the designer may leverage imperfect machine-learned predictions of unknown information to obtain stronger guarantees when the prediction is accurate (consistency), while still maintaining a near-optimal worst-case ratio (robustness). In this work, we revisit competitive auctions in the learning-augmented setting. Leveraging imperfect predictions of the bidders' private values, we design learning-augmented mechanisms for several competitive auctions with different constraints, including digital goods auctions, limited-supply auctions, and general downward-closed permutation environments. For all these auction environments, our mechanisms achieve $1$-consistency against the strongest benchmark $OPT$, for which no $O(1)$-competitive mechanism exists without predictions. At the same time, our mechanisms maintain $O(1)$-robustness against all benchmarks considered in traditional competitive analysis. To account for possibly inaccurate predictions, we provide a reduction that transforms our learning-augmented mechanisms into error-tolerant versions, ensuring satisfactory revenue in scenarios where the prediction error is moderate.
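For concreteness, one common way to formalize the consistency and robustness guarantees referenced above is sketched below; the paper's exact definitions and choice of benchmark $\mathcal{B}$ may differ. Here $M$ denotes a mechanism that receives a prediction $\hat{v}$ of the private values $v$, and $\mathrm{REV}_M(v,\hat{v})$ denotes its (expected) revenue.
\[
\text{$\alpha$-consistency:}\qquad \mathrm{REV}_M(v,\hat{v}) \;\ge\; \tfrac{1}{\alpha}\,\mathrm{OPT}(v) \quad \text{whenever } \hat{v}=v,
\]
\[
\text{$\beta$-robustness:}\qquad \mathrm{REV}_M(v,\hat{v}) \;\ge\; \tfrac{1}{\beta}\,\mathcal{B}(v) \quad \text{for every prediction } \hat{v},
\]
where $\mathcal{B}$ is a benchmark from the classical competitive analysis. In this notation, the results above correspond to $\alpha = 1$ with $\mathcal{B} = \mathrm{OPT}$ for consistency, and $\beta = O(1)$ with the traditional benchmarks for robustness.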