Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans; the bigger the performance gap, the harder the dataset is said to be. Not only is this framework informal, but it also provides little understanding of how difficult each instance is, or what attributes make it difficult for a given model. To address these problems, we propose an information-theoretic perspective, framing dataset difficulty as the absence of $\textit{usable information}$. Measuring usable information is as easy as measuring performance, but it has certain theoretical advantages: while the latter only allows us to compare different models w.r.t. the same dataset, the former also allows us to compare different datasets w.r.t. the same model. We then introduce $\textit{pointwise}$ $\mathcal{V}$-$\textit{information}$ (PVI) for measuring the difficulty of individual instances, where instances with higher PVI are easier for model $\mathcal{V}$. By manipulating the input before measuring usable information, we can understand $\textit{why}$ a dataset is easy or difficult for a given model, which we use to discover annotation artefacts in widely-used benchmarks.
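For concreteness, the quantities can be sketched as follows, assuming the predictive $\mathcal{V}$-entropy formulation of usable information; the notation here ($g$, $g'$, $\varnothing$) is illustrative rather than the paper's formal presentation:
$$
I_\mathcal{V}(X \to Y) = H_\mathcal{V}(Y) - H_\mathcal{V}(Y \mid X),
\qquad
H_\mathcal{V}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}\bigl[-\log_2 f[x](y)\bigr],
$$
$$
\mathrm{PVI}(x \to y) = -\log_2 g[\varnothing](y) + \log_2 g'[x](y),
$$
where $g, g' \in \mathcal{V}$ are models fit without and with the input respectively, and $\varnothing$ is a null input carrying no information about $y$. Averaging PVI over a dataset yields an estimate of $I_\mathcal{V}(X \to Y)$, so instance-level and dataset-level difficulty are measured in the same units (bits).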