We develop an asymptotic theory of adversarial estimators ('A-estimators'). They generalize maximum-likelihood-type estimators ('M-estimators') in that their objective is maximized by some parameters and minimized by others. This class subsumes the continuous-updating Generalized Method of Moments, Generative Adversarial Networks, and more recent proposals in machine learning and econometrics. In these examples, researchers state which aspects of the problem may in principle be used for estimation, and an adversary learns how to emphasize them optimally. We derive the convergence rates of A-estimators under pointwise and partial identification, and the asymptotic normality of functionals of their parameters. Unknown functions may be approximated via sieves such as deep neural networks, for which we provide simplified low-level conditions. As a corollary, we obtain the asymptotic normality of neural-network M-estimators, overcoming technical issues previously identified in the literature. Our theory yields novel results about a variety of A-estimators, providing intuition and formal justification for their success in recent applications.
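To make the minimax structure of an A-estimator concrete, here is a deliberately minimal toy sketch (not from the paper): the estimator chooses a parameter theta to minimize an objective that an adversary, choosing a multiplier lam, simultaneously maximizes. The objective, data, and step size below are all hypothetical illustrations; the inner maximization reduces the problem to ordinary moment matching, so descent-ascent recovers the sample mean.

```python
# Toy A-estimator sketch (hypothetical example, not the paper's method):
# estimate a mean theta via the minimax objective
#   L(theta, lam) = lam * mean(x - theta) - lam**2 / 2.
# The adversary maximizes over lam (emphasizing the moment condition),
# the estimator minimizes over theta.

data = [1.0, 2.0, 3.0, 4.0]
n = len(data)

def grads(theta, lam):
    """Return (dL/dtheta, dL/dlam) at the current parameters."""
    g = sum(x - theta for x in data) / n  # sample moment mean(x - theta)
    return -lam, g - lam

theta, lam = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    d_theta, d_lam = grads(theta, lam)
    theta -= lr * d_theta  # estimator: gradient descent
    lam += lr * d_lam      # adversary: gradient ascent

print(round(theta, 3))  # converges to the sample mean, 2.5
```

The quadratic penalty on lam makes the inner problem strongly concave, so simultaneous gradient descent-ascent converges; this mirrors how, in the estimators the abstract lists, the adversary learns which features of the problem to emphasize.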