Learning or estimating game models from data typically entails inducing separate models for each setting, even if the games are parametrically related. In empirical mechanism design, for example, this approach requires learning a new game model for each candidate setting of the mechanism parameter. Recent work has shown the data-efficiency benefits of learning a single parameterized model for families of related games. In Bayesian games, a typical model for mechanism design, payoffs depend on both the actions and the types of the players. We show how to exploit this structure by learning an interim game-family model that conditions on a single player's type. We compare this approach to the baseline of directly learning the ex ante payoff function, which gives payoffs in expectation over all player types. By marginalizing over player type, the interim model can also provide ex ante payoff predictions. This dual capability not only facilitates Bayes-Nash equilibrium approximation, but also enables new types of analysis using the conditional model. We validate our method through a case study of a dynamic sponsored search auction. In our experiments, the interim model approximates equilibria more reliably than the ex ante model and exhibits effective parameter extrapolation. With local search over the parameter space, the learned game-family model can be used for mechanism design. Finally, without any additional sample data, we leverage the interim model to compute piecewise best-response strategies and refine our model to incorporate these strategies, enabling an iterative approach to empirical mechanism design.
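To make the interim/ex ante distinction concrete, the sketch below uses a hypothetical `interim_payoff` function (a stand-in for a learned regressor, not the paper's model) conditioned on a player's own type, and recovers ex ante payoff predictions by Monte Carlo marginalization over the type distribution; all names and the closed-form payoff are illustrative assumptions.

```python
import numpy as np

# Hypothetical interim payoff model: maps (action profile, own type, mechanism
# parameter) to a player's expected payoff conditioned on that player's type.
# In practice this would be a model learned from simulation data; here a simple
# closed-form stand-in is used purely for illustration.
def interim_payoff(action_profile, own_type, mech_param):
    a_self, a_other = action_profile
    return own_type * a_self - mech_param * a_self * a_other

def ex_ante_payoff(action_profile, mech_param, type_sampler, n_samples=10_000):
    """Ex ante payoff prediction: marginalize the interim model over the
    player's own type distribution via Monte Carlo sampling."""
    types = type_sampler(n_samples)
    return np.mean([interim_payoff(action_profile, t, mech_param) for t in types])

# Example usage: types drawn uniformly on [0, 1].
rng = np.random.default_rng(0)
sampler = lambda n: rng.uniform(0.0, 1.0, size=n)
print(interim_payoff((0.5, 0.3), own_type=0.8, mech_param=0.2))   # conditional on type
print(ex_ante_payoff((0.5, 0.3), mech_param=0.2, type_sampler=sampler))  # marginalized
```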