Newton-step approximations to pseudo maximum likelihood estimates of spatial autoregressive models with a large number of parameters are examined, where the parameter space is allowed to grow slowly with sample size. These estimates have the same asymptotic efficiency properties as maximum likelihood under Gaussianity but are available in closed form. Hence they are computationally simple and free from compactness assumptions, thereby avoiding two notorious pitfalls of implicitly defined estimates of large spatial autoregressions. When the initial estimate is least squares, the Newton step can also yield a central limit theorem under weaker regularity conditions than those extant in the literature. A simulation study demonstrates excellent finite-sample gains from Newton iterations, especially in large multiparameter models for which grid search is costly. A small empirical illustration shows improvements in estimation precision with real data.
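The core idea of a Newton step from a closed-form initial estimate can be sketched generically. This is an illustrative toy, not the paper's spatial autoregressive estimator: the update is theta1 = theta0 - H(theta0)^{-1} g(theta0), where g and H are the gradient and Hessian of a pseudo-log-likelihood. The particular likelihood used below (Gaussian mean with known unit variance, for which a single Newton step is exact because the objective is quadratic) is an assumption chosen purely for demonstration.

```python
import numpy as np

def newton_step(theta0, grad, hess):
    """One Newton step toward the pseudo-ML optimum:
    theta1 = theta0 - H(theta0)^{-1} g(theta0)."""
    g = np.atleast_1d(grad(theta0))
    H = np.atleast_2d(hess(theta0))
    return theta0 - np.linalg.solve(H, g)

# Toy pseudo-log-likelihood: Gaussian mean, unit variance.
# g(mu) = sum(x - mu), H(mu) = -n; the quadratic objective makes
# one Newton step exact, mirroring the closed-form appeal of the
# one-step estimator described above.
x = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda mu: np.sum(x - mu)
hess = lambda mu: -float(len(x))

theta1 = newton_step(np.array([0.0]), grad, hess)  # lands on the sample mean
```

In the paper's setting the initial estimate would itself come from least squares rather than an arbitrary starting value, and g and H would be the score and Hessian of the Gaussian pseudo-log-likelihood of the spatial autoregression; the single-step structure, however, is the same.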