In this work, we introduce and study contextual search in general principal-agent games, where a principal repeatedly interacts with agents by offering contracts based on contextual information and historical feedback, without knowing the agents' true costs or rewards. Our model generalizes classical contextual pricing by accommodating richer agent action spaces. Over $T$ rounds with $d$-dimensional contexts, we establish an asymptotically tight $T^{1 - \Theta(1/d)}$ bound on the pessimistic Stackelberg regret, benchmarked against the best principal utility consistent with the observed feedback. We also establish an $\Omega(T^{\frac{1}{2}-\frac{1}{2d}})$ lower bound on the classic Stackelberg regret for principal-agent games, demonstrating a surprising double-exponential hardness separation from the contextual pricing problem (a.k.a. the principal-agent game with two actions), which is known to admit a near-optimal $O(d\log\log T)$ regret bound [Kleinberg and Leighton, 2003; Leme and Schneider, 2018; Liu et al., 2021]. Notably, this double-exponential separation occurs even in the special case of three actions and two-dimensional contexts. We show that this significant increase in learning difficulty arises from a structural phenomenon we call contextual action degeneracy: adversarially chosen contexts can render some actions strictly dominated (and hence unincentivizable), blocking the principal from exploring or learning about them and fundamentally limiting learning progress.
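To make contextual action degeneracy concrete, here is a minimal illustrative example of our own (a hypothetical instance, not the paper's lower-bound construction). Suppose outcomes are binary, the contract is a single payment $t \ge 0$ for the good outcome, the three actions have success probabilities $p = (0.2, 0.5, 0.9)$, and the cost of action $a$ under context $x \in \mathbb{R}^2$ is $c_a(x) = \langle x, \theta_a \rangle$ for some hypothetical parameters $\theta_a$. The agent's utility from action $a$ is $u_a(t) = p_a t - c_a(x)$, so action $a$ can be incentivized only if $u_a(t) \ge u_{a'}(t)$ for all $a'$ at some payment $t \ge 0$. If an adversarial context drives the cost vector to, say, $c(x) = (0.1, 1.0, 0.6)$, then for every $t \ge 0$,
$$
u_1(t) = 0.5\,t - 1.0 \;<\; \max\{\,0.2\,t - 0.1,\; 0.9\,t - 0.6\,\},
$$
since $u_1(t) - u_2(t) = -0.4\,t - 0.4 < 0$. Action $1$ is thus strictly dominated: no contract elicits it, the principal receives no feedback about it, and its parameters remain unlearnable under such contexts.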