The saddlepoint approximation expresses the approximate density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of $n$ unobserved i.i.d. terms, the basic classical result is that the relative error in the density is of order $1/n$. If instead the approximation is interpreted as a likelihood and maximised as a function of model parameters, the result is an approximation to the maximum likelihood estimate (MLE) that can be much faster to compute than the true MLE. This paper proves the analogous basic result for the approximation error between the saddlepoint MLE and the true MLE: subject to certain explicit identifiability conditions, the error has asymptotic size $O(1/n^2)$ for some parameters, and $O(1/n^{3/2})$ or $O(1/n)$ for others. In all three cases, the approximation errors are asymptotically negligible compared to the inferential uncertainty. The proof is based on a factorisation of the saddlepoint likelihood into an exact and an approximate term, along with an analysis of the approximation error in the gradient of the log-likelihood. This factorisation also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation, for which we derive analogous error bounds. As a corollary of our results, we also obtain the asymptotic size of the MLE approximation error when the saddlepoint approximation is replaced by the normal approximation.
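As a minimal illustration of the classical density result described above, the sketch below applies the saddlepoint approximation to the sum of $n$ i.i.d. Exponential(1) terms, whose exact density (Gamma$(n,1)$) is available for comparison. The choice of distribution and the Newton solver for the saddlepoint equation are illustrative assumptions, not the paper's method; the relative error at the mean matches the stated $O(1/n)$ rate.

```python
import math

# Per-term cumulant generating function of an Exponential(1) variable
# and its first two derivatives (illustrative choice of distribution).
def K(t):  return -math.log(1.0 - t)
def K1(t): return 1.0 / (1.0 - t)
def K2(t): return 1.0 / (1.0 - t) ** 2

def saddlepoint_density(s, n):
    """Saddlepoint approximation to the density at s of a sum of n i.i.d. terms.

    Solves the saddlepoint equation n * K'(t_hat) = s by Newton's method
    (a closed form exists for this example, but we iterate to mirror the
    general recipe), then evaluates
        f_hat(s) = exp(n*K(t_hat) - t_hat*s) / sqrt(2*pi*n*K''(t_hat)).
    """
    t = 0.0
    for _ in range(50):
        t -= (n * K1(t) - s) / (n * K2(t))
    return math.exp(n * K(t) - t * s) / math.sqrt(2.0 * math.pi * n * K2(t))

def exact_density(s, n):
    """Exact Gamma(n, 1) density of the sum of n Exponential(1) terms."""
    return math.exp((n - 1) * math.log(s) - s - math.lgamma(n))
```

Evaluating both densities at the mean $s = n$, the relative error shrinks roughly fourfold when $n$ is increased from 20 to 80, consistent with the $O(1/n)$ relative error of the density approximation.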