RegretNet is a recent breakthrough in the automated design of revenue-maximizing auctions. It combines the expressivity of deep learning with a regret-based approach that relaxes the Incentive Compatibility constraint (that bidding truthfully is in each participant's best interest). We propose two independent modifications of RegretNet: a neural architecture based on the attention mechanism, denoted RegretFormer, and an interpretable loss function that is significantly less sensitive to hyperparameters. We investigate both proposed modifications in an extensive experimental study that includes settings with fixed and varied numbers of items and participants, novel validation procedures, and out-of-setting generalization. We find that RegretFormer consistently outperforms existing architectures in revenue and, unlike them, remains applicable when the input size is variable. Regarding our loss modification, we confirm its effectiveness in controlling the revenue-regret trade-off by varying a single interpretable hyperparameter.
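To make the regret-based relaxation concrete, the following is a minimal PyTorch-style sketch of a regret-penalized training objective of the kind RegretNet-style approaches build on. The function name, the penalty weight `gamma`, and the placeholder values are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def regret_penalized_loss(revenue: torch.Tensor,
                          regret: torch.Tensor,
                          gamma: float = 1.0) -> torch.Tensor:
    """Illustrative regret-penalized objective (not the paper's exact loss).

    revenue: scalar tensor, empirical expected revenue of the mechanism
    regret:  tensor of per-bidder empirical regret estimates (the utility a
             bidder could gain by misreporting); zero regret means the
             mechanism is incentive compatible on the sampled valuations
    gamma:   hypothetical penalty weight trading revenue against regret
    """
    # Maximize revenue (minimize its negative) while pushing regret to zero.
    return -revenue + gamma * regret.mean()


# Toy usage with placeholder values standing in for a trained auction network.
revenue = torch.tensor(1.7)                # expected payment on a batch
regret = torch.tensor([0.02, 0.01, 0.03])  # per-bidder regret estimates
loss = regret_penalized_loss(revenue, regret, gamma=5.0)
```

The single weight on the regret term illustrates the general trade-off the abstract refers to; how that weight is set or adapted during training is what distinguishes the original RegretNet objective from the loss modification proposed here.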