Consider the problem of binary hypothesis testing. Given $Z$ coming from either $\mathbb P^{\otimes m}$ or $\mathbb Q^{\otimes m}$, to decide between the two with a small probability of error it is sufficient, and in most cases necessary, to have $m \asymp 1/\epsilon^2$, where $\epsilon$ measures the separation between $\mathbb P$ and $\mathbb Q$ in total variation ($\mathsf{TV}$). Achieving this, however, requires complete knowledge of the distributions and can be done, for example, using the Neyman-Pearson test. In this paper we consider a variation of the problem, which we call likelihood-free (or simulation-based) hypothesis testing, where access to $\mathbb P$ and $\mathbb Q$ is given through $n$ iid observations from each. When $\mathbb P, \mathbb Q$ are assumed to belong to a non-parametric family $\mathcal P$, we demonstrate the existence of a fundamental trade-off between $n$ and $m$ given by $nm \asymp n_\mathsf{GoF}^2(\epsilon, \mathcal P)$, where $n_\mathsf{GoF}$ is the minimax sample complexity of testing between the hypotheses $H_0: \mathbb P = \mathbb Q$ vs. $H_1: \mathsf{TV}(\mathbb P, \mathbb Q) \ge \epsilon$. We show this for three families of distributions: $\beta$-smooth densities supported on $[0,1]^d$, the Gaussian sequence model over a Sobolev ellipsoid, and the collection of distributions on the alphabet $[k] = \{1, 2, \dots, k\}$ with pmfs bounded by $c/k$ for a fixed $c$. For the larger family of all distributions on $[k]$ we obtain a more complicated trade-off that exhibits a phase transition. The test that we propose, based on the $L^2$-distance statistic of Ingster, simultaneously achieves all points on the trade-off curve for the regular classes. This demonstrates the possibility of testing without fully estimating the distributions, provided $m \gg 1/\epsilon^2$.
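To make the setup concrete, below is a minimal sketch (not the exact test or constants analyzed in the paper) of one natural $L^2$-distance scheme for the bounded-pmf alphabet case: given reference samples $X \sim \mathbb P^{\otimes n}$, $Y \sim \mathbb Q^{\otimes n}$ and the unknown sample $Z$ of size $m$, declare the hypothesis whose reference sample is closer to $Z$ in an unbiased estimate of squared $L^2$ distance. All function names and the toy parameter values are illustrative assumptions.

```python
import numpy as np

def counts(sample, k):
    """Histogram of a sample over the alphabet {0, ..., k-1}."""
    return np.bincount(sample, minlength=k).astype(float)

def l2_sq_unbiased(c1, n1, c2, n2):
    """Unbiased estimate of the squared L2 distance between two pmfs,
    computed from their multinomial counts c1 (n1 samples) and c2 (n2 samples)."""
    same1 = np.sum(c1 * (c1 - 1)) / (n1 * (n1 - 1))
    same2 = np.sum(c2 * (c2 - 1)) / (n2 * (n2 - 1))
    cross = np.sum(c1 * c2) / (n1 * n2)
    return same1 + same2 - 2.0 * cross

def lf_test(x, y, z, k):
    """Likelihood-free decision: return 0 if z looks closer (in estimated
    L2 distance) to the reference sample x from P than to y from Q, else 1."""
    cx, cy, cz = counts(x, k), counts(y, k), counts(z, k)
    d_to_p = l2_sq_unbiased(cz, len(z), cx, len(x))
    d_to_q = l2_sq_unbiased(cz, len(z), cy, len(y))
    return 0 if d_to_p < d_to_q else 1

# Toy usage: P uniform on [k], Q a small perturbation with TV(P, Q) = eps / 2.
rng = np.random.default_rng(0)
k, n, m, eps = 200, 5000, 2000, 0.1
p = np.full(k, 1.0 / k)
q = p.copy()
q[: k // 2] += eps / k
q[k // 2 :] -= eps / k
x = rng.choice(k, size=n, p=p)
y = rng.choice(k, size=n, p=q)
z = rng.choice(k, size=m, p=q)   # ground truth: Z comes from Q
print("decision (0 = P, 1 = Q):", lf_test(x, y, z, k))
```

The comparison of two unbiased $L^2$-distance estimates is what lets the reference samples of size $n$ and the unknown sample of size $m$ trade off against each other, in the spirit of the $nm \asymp n_\mathsf{GoF}^2(\epsilon, \mathcal P)$ relation stated above.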