Maximum likelihood (ML) estimation is widely used in statistics. The h-likelihood has been proposed as an extension of Fisher's likelihood to statistical models of recent interest that include unobserved latent variables. Its advantage is that joint maximization gives ML estimators (MLEs) of both fixed and random parameters, together with their standard error estimates. However, the current h-likelihood approach does not yield MLEs of variance components, just as Henderson's joint likelihood does not in linear mixed models. In this paper, we show how to form the h-likelihood so that joint maximization yields MLEs of all parameters. We also show the role of the Jacobian term, which allows MLEs in the presence of unobserved latent variables. To obtain MLEs for the fixed parameters, intractable integration is not necessary. As an illustration, we show one-shot ML imputation for missing data by treating the missing values as realized but unobserved random parameters. We show that the h-likelihood bypasses the expectation step of the expectation-maximization (EM) algorithm and allows single ML imputation in place of multiple imputation. We also discuss how predictions differ between random effects and missing data.
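To give a flavor of the idea, the following toy sketch treats missing values as unobserved parameters and maximizes jointly, rather than averaging over them in an E-step. This is only an illustrative Gaussian MCAR example with hypothetical names, not the paper's h-likelihood construction (in particular, it ignores the Jacobian term and the variance-component issue discussed above); for a normal mean, filling each missing value with its maximizer and re-maximizing converges to the observed-data MLE.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=100)
obs = y[:80]          # observed values
n_miss = 20           # suppose the last 20 values are missing (MCAR)

# Treat the missing values as realized but unobserved parameters.
# Alternate: (1) replace each missing value by its maximizing value
# given mu (here simply mu itself), (2) update mu by maximizing the
# complete-data log-likelihood with the filled-in values.
mu = 0.0
for _ in range(100):
    y_miss = np.full(n_miss, mu)                # single ML imputation step
    mu = np.concatenate([obs, y_miss]).mean()   # maximization step

# The fixed point is the observed-data MLE, i.e. the mean of the
# observed values, so no integration over the missing data is needed.
print(mu, obs.mean())
```

Because the imputed values are point maximizers rather than draws from a posterior, one such imputation suffices for the point estimate, in contrast to multiple imputation, which averages over repeated random fills.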