Consider $n$ items, each of which is characterised by one of $d+1$ possible features in $\{0, \ldots, d\}$. We study the inference task of learning these features through queries on subsets, or pools, of the items that only reveal a form of coarsened information on the features, in our case the sum of all the features in the pool. This scenario is realistic when the data collection process is subject to memory or technical constraints, or when the data is anonymised. Prominent related problems are the quantitative group testing problem, of which our setting is a generalisation, and the compressed sensing problem, of which it is a special case. In the present article, we are interested in the minimum number of queries needed to efficiently infer the features when one of them, say $0$, is dominant, in the sense that the number $k$ of items with a non-zero feature is much smaller than $n$. It is known that in this case, all features can be recovered in exponential time using no more than $O(k)$ queries. However, so far, all \textit{efficient} inference algorithms required at least $\Omega(k\ln n)$ queries, and it was unknown whether this gap is artificial or of a fundamental nature. Here we show that this gap between the information-theoretic and computational bounds is not inherent to the problem: we provide an efficient algorithm that succeeds with high probability and employs no more than $O(k)$ queries. This also solves a long-standing open question for the quantitative group testing problem.
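To make the query model concrete, the following is a minimal sketch in Python of the observation process described above: a hidden feature vector with $k$ non-zero entries, where each query selects a pool of items and reveals only the sum of their features. All names and parameter values here are illustrative choices, not part of the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# n items, features in {0, ..., d}, of which only k are non-zero.
# These sizes are hypothetical, picked for demonstration.
n, d, k = 1000, 3, 20

features = np.zeros(n, dtype=int)
support = rng.choice(n, size=k, replace=False)
features[support] = rng.integers(1, d + 1, size=k)

def pooled_query(pool):
    """A query on a pool reveals only the coarsened information:
    the sum of the features of the items in the pool."""
    return int(features[pool].sum())

# Example: query a random pool of 50 items; the individual
# features stay hidden, only their sum is observed.
pool = rng.choice(n, size=50, replace=False)
print(pooled_query(pool))
\end{verbatim}

The inference task is then to recover the vector \texttt{features} from the answers to as few such pooled queries as possible.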