Given a mixture between two populations of coins, "positive" coins that each have -- unknown and potentially different -- bias $\geq\frac{1}{2}+\Delta$ and "negative" coins with bias $\leq\frac{1}{2}-\Delta$, we consider the task of estimating the fraction $\rho$ of positive coins to within additive error $\epsilon$. We achieve matching upper and lower bounds of $\Theta(\frac{\rho}{\epsilon^2\Delta^2}\log\frac{1}{\delta})$ samples for a $1-\delta$ probability of success, where crucially, our lower bound applies to all fully-adaptive algorithms. Thus, our sample complexity bounds have tight dependence on every relevant problem parameter. A crucial component of our lower bound proof is a decomposition lemma (see Lemmas 17 and 18) showing how to assemble partially-adaptive bounds into a fully-adaptive bound, which may be of independent interest: though we invoke it for the special case of Bernoulli random variables (coins), it applies to general distributions. We present simulation results to demonstrate the practical efficacy of our approach for realistic problem parameters for crowdsourcing applications, focusing on the "rare events" regime where $\rho$ is small. The fine-grained adaptive flavor of both our algorithm and lower bound contrasts with much previous work in distributional testing and learning.
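To make the problem setup concrete, the following is a minimal illustrative sketch (not the paper's adaptive algorithm): it draws coins from a hypothetical mixture with bias gap $\Delta$, flips each coin a fixed number of times, classifies by majority vote, and reports the fraction classified as positive. The parameter values and the non-adaptive strategy are assumptions for illustration only; such a baseline spends far more flips than the $\Theta(\frac{\rho}{\epsilon^2\Delta^2}\log\frac{1}{\delta})$ adaptive bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem parameters (not taken from the paper's experiments).
rho, Delta = 0.1, 0.2       # true fraction of positive coins, bias gap
n_coins = 2000              # number of coins drawn from the mixture

# Draw a mixture: positive coins have bias >= 1/2 + Delta,
# negative coins have bias <= 1/2 - Delta (here, exactly at the boundary).
is_positive = rng.random(n_coins) < rho
biases = np.where(is_positive, 0.5 + Delta, 0.5 - Delta)

# Naive non-adaptive baseline: flip every coin k times, classify by majority,
# and report the fraction of coins classified as positive.
k = 51
heads = rng.binomial(k, biases)
rho_hat = np.mean(heads > k / 2)

print(f"true rho = {rho:.3f}, estimate = {rho_hat:.3f}")
```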