The saddlepoint approximation gives an approximation to the density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of $n$ unobserved i.i.d.\ terms, the basic classical result is that the relative error in the density is of order $1/n$. If instead the approximation is interpreted as a likelihood and maximised as a function of model parameters, the result is an approximation to the maximum likelihood estimate (MLE) that can be much faster to compute than the true MLE. This paper proves the analogous basic result for the approximation error between the saddlepoint MLE and the true MLE: subject to certain explicit identifiability conditions, the error has asymptotic size $O(1/n^2)$ for some parameters, and $O(1/n^{3/2})$ or $O(1/n)$ for others. In all three cases, the approximation errors are asymptotically negligible compared to the inferential uncertainty. The proof is based on a factorisation of the saddlepoint likelihood into an exact and an approximate term, along with an analysis of the approximation error in the gradient of the log-likelihood. This factorisation also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation, for which we derive analogous error bounds. As a corollary of our results, we also obtain the asymptotic size of the MLE approximation error when the saddlepoint approximation is replaced by the normal approximation.
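As a concrete illustration of the density approximation discussed above (not part of the paper itself), the following sketch applies the classical saddlepoint formula $\hat f(x) = \exp\{K(\hat s) - \hat s x\} / \sqrt{2\pi K''(\hat s)}$, where $K'(\hat s) = x$, to a sum of $n$ i.i.d.\ exponential variables. The choice of exponential summands is an assumption made purely so that the saddlepoint equation solves in closed form and the exact density (a gamma) is available for comparison; in this case the relative error is the Stirling-series error $\approx 1/(12n)$, consistent with the $O(1/n)$ rate stated above.

```python
import math

def saddlepoint_density(x, n, lam):
    """Saddlepoint approximation to the density at x of a sum of
    n i.i.d. Exponential(lam) random variables."""
    # CGF of the sum: K(t) = -n * log(1 - t/lam), valid for t < lam.
    # Saddlepoint equation K'(s) = n / (lam - s) = x  =>  s_hat = lam - n/x.
    s_hat = lam - n / x
    K = -n * math.log(1 - s_hat / lam)       # K(s_hat)
    K2 = n / (lam - s_hat) ** 2              # K''(s_hat)
    return math.exp(K - s_hat * x) / math.sqrt(2 * math.pi * K2)

def exact_density(x, n, lam):
    """Exact density: the sum is Gamma(shape=n, rate=lam)."""
    return lam**n * x**(n - 1) * math.exp(-lam * x) / math.gamma(n)
```

Evaluating the relative error at the mean $x = n/\lambda$ for $n = 10$ and $n = 20$ shows it roughly halving as $n$ doubles, matching the $O(1/n)$ relative-error rate for the density.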