Capture-recapture experiments are widely used to estimate the abundance of a finite population. Based on capture-recapture data, the empirical likelihood (EL) method has been shown to outperform the conventional conditional likelihood (CL) method. However, the current literature on EL abundance estimation ignores behavioral effects, and the EL estimates may not be stable, especially when the capture probability is low. We make three contributions in this paper. First, we extend the EL method to capture-recapture models that account for behavioral effects. Second, to overcome the instability of the EL method, we propose a penalized EL (PEL) estimation method that penalizes large abundance values. We then investigate the asymptotics of the maximum PEL estimator and the PEL ratio statistic. Third, we develop standard expectation-maximization (EM) algorithms for PEL to improve its practical performance. The EM algorithm is also applicable to EL and CL with slight modifications. Our simulation studies and a real-world data analysis demonstrate that the PEL method successfully overcomes the instability of the EL method and that the proposed EM algorithm produces more reliable results than existing optimization algorithms.
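To make the penalization idea concrete, the following is a minimal, self-contained sketch rather than the paper's PEL estimator: it replaces the empirical likelihood with an ordinary profile likelihood under the simple model M_0 (no behavioral effect) and attaches an illustrative quadratic penalty that grows with the abundance. The simulation setup, the penalty form, and all names (`neg_penalized_loglik`, `penalty`) are assumptions made purely for illustration.

```python
# Toy illustration (NOT the paper's PEL method): penalizing large abundance
# values in a capture-recapture likelihood, here a profile likelihood under
# model M_0 with an assumed quadratic penalty.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulate capture histories: N individuals, T occasions, capture probability p.
N_true, T, p_true = 200, 5, 0.15
captures = rng.binomial(1, p_true, size=(N_true, T))
observed = captures[captures.sum(axis=1) > 0]      # individuals caught at least once
n = observed.shape[0]                              # number of distinct individuals seen
total_caps = observed.sum()                        # total number of captures

def neg_penalized_loglik(N, penalty=1.0):
    """Negative profile log-likelihood for abundance N under M_0,
    plus an illustrative penalty that discourages large N."""
    if N < n:
        return np.inf
    p_hat = total_caps / (N * T)                   # profile out p for the given N
    loglik = (gammaln(N + 1) - gammaln(N - n + 1)  # log N!/(N-n)! term
              + total_caps * np.log(p_hat)
              + (N * T - total_caps) * np.log1p(-p_hat))
    return -loglik + penalty * (N / n - 1) ** 2    # toy penalty on large N/n

# Abundance is integer-valued, so a simple grid search suffices here.
grid = np.arange(n, 10 * n)
N_hat = grid[np.argmin([neg_penalized_loglik(N) for N in grid])]
print(f"observed n = {n}, penalized abundance estimate = {N_hat}")
```

Without the penalty term, the profile likelihood in N can be very flat when the capture probability is small, which is the instability that motivates penalization; the penalty pulls the estimate away from implausibly large abundance values.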