We develop an asymptotic theory of adversarial estimators ('A-estimators'). They generalize maximum-likelihood-type estimators ('M-estimators') in that their average objective is maximized by some parameters and minimized by others. This class subsumes the continuous-updating Generalized Method of Moments, Generative Adversarial Networks, and more recent proposals in machine learning and econometrics. In these examples, researchers state which aspects of the problem may in principle be used for estimation, and an adversary learns how to emphasize them optimally. We derive the convergence rates of A-estimators under pointwise and partial identification, and the asymptotic normality of functionals of their parameters. Unknown functions may be approximated via sieves such as deep neural networks, for which we provide simplified low-level conditions. As a corollary, we obtain the asymptotic normality of neural-net M-estimators, overcoming technical issues previously identified in the literature. Our theory yields novel results about a variety of A-estimators, providing intuition and formal justification for their success in recent applications.