Maximum likelihood estimation in logistic regression with mixed effects is known to often result in estimates on the boundary of the parameter space. Such estimates, which include infinite values for fixed effects and singular or infinite variance components, can wreak havoc on numerical estimation procedures and inference. We introduce an appropriately scaled additive penalty to the log-likelihood function, or an approximation thereof, which penalizes the fixed effects by the Jeffreys invariant prior for the model with no random effects, and the variance components by a composition of negative Huber loss functions. The resulting maximum penalized likelihood estimates are shown to lie in the interior of the parameter space. Appropriate scaling of the penalty guarantees that the penalization is soft enough to preserve the optimal asymptotic properties expected of the maximum likelihood estimator, namely consistency, asymptotic normality, and Cram\'er-Rao efficiency. Our choice of penalties and scaling factor preserves equivariance of the fixed-effects estimates under linear transformations of the model parameters, such as contrasts. Maximum softly-penalized likelihood is compared to competing approaches in two real-data examples, and through comprehensive simulation studies that illustrate its superior finite-sample performance.