Bidders in combinatorial auctions face significant challenges when describing their preferences to an auctioneer. Classical work on preference elicitation focuses on query-based techniques inspired by proper learning, often implemented via proxies that interface between bidders and an auction mechanism, to incrementally learn bidder preferences as needed to compute efficient allocations. Although such elicitation mechanisms are query-efficient in theory, the amount of communication they require may still be too cognitively taxing in practice. We propose a family of efficient LLM-based proxy designs for eliciting preferences from bidders in natural language. Our proposed mechanism combines LLM pipelines with DNF proper-learning techniques to quickly approximate preferences when communication is limited. To validate our approach, we create a testing sandbox for elicitation mechanisms that communicate in natural language. In our experiments, our most promising LLM proxy design reaches approximately efficient outcomes with five times fewer queries than classical proper-learning-based elicitation mechanisms.
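Below is a minimal, hypothetical sketch of how an LLM proxy might pair natural-language querying with a DNF-style proper-learning loop, under stated assumptions. The names (`DNFHypothesis`, `elicit`, `proxy_ask`) are illustrative placeholders rather than the paper's actual interfaces, and the hypothesis class simply scores a bundle by the largest value of any previously reported sub-bundle it contains.

```python
# Hypothetical sketch of an LLM-proxy elicitation loop (not the paper's API).
# The proxy converses with a bidder in natural language, converts replies into
# (bundle, value) observations, and maintains a DNF-style hypothesis of the
# bidder's valuation, in the spirit of proper-learning-based elicitation.

from dataclasses import dataclass, field

Bundle = frozenset  # a bundle is a set of item identifiers


@dataclass
class DNFHypothesis:
    """DNF-style valuation: a bundle is worth the maximum value of any
    stored term (previously reported sub-bundle) it contains."""
    terms: dict = field(default_factory=dict)  # Bundle -> reported value

    def value(self, bundle: Bundle) -> float:
        covered = [v for t, v in self.terms.items() if t <= bundle]
        return max(covered, default=0.0)

    def update(self, bundle: Bundle, value: float) -> None:
        # Record the observation only if the current hypothesis does not
        # already explain it.
        if value > self.value(bundle):
            self.terms[bundle] = value


def elicit(proxy_ask, candidate_bundles, budget: int) -> DNFHypothesis:
    """Query the bidder on at most `budget` bundles and return the learned
    hypothesis. `proxy_ask(bundle)` is assumed to wrap an LLM pipeline that
    poses the question in natural language and parses the bidder's reply
    into a numeric value."""
    hypothesis = DNFHypothesis()
    for bundle in candidate_bundles[:budget]:
        reported = proxy_ask(bundle)                   # natural-language round trip
        hypothesis.update(Bundle(bundle), reported)    # refine the DNF hypothesis
    return hypothesis
```

For instance, calling `elicit(ask, [{"A"}, {"A", "B"}], budget=2)` with some reply-parsing function `ask` would query the bidder twice and return a hypothesis whose `value` method approximates the reported valuation on larger bundles.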