Ultra-fine entity typing (UFET) predicts extremely free-formed types (e.g., president, politician) of a given entity mention (e.g., Joe Biden) in context. State-of-the-art (SOTA) methods use the cross-encoder (CE) based architecture. CE concatenates a mention (with its context) with each type and feeds the pair into a pretrained language model (PLM) to score their relevance. It enables deeper interaction between the mention and types and thus reaches better performance, but it has to perform N (the type set size) forward passes to infer the types of a single mention. CE is therefore very slow at inference when the type set is large (e.g., N = 10k for UFET). To this end, we propose to perform entity typing in a recall-expand-filter manner. The recall and expand stages prune the large type set and generate the K (typically less than 256) most relevant type candidates for each mention. At the filter stage, we use a novel model called MCCE to concurrently encode and score these K candidates in a single forward pass and obtain the final type predictions. We investigate different variants of MCCE, and extensive experiments show that MCCE under our paradigm reaches SOTA performance on ultra-fine entity typing while being thousands of times faster than the cross-encoder. We also find that MCCE is very effective on fine-grained (130 types) and coarse-grained (9 types) entity typing. Our code is available at \url{https://github.com/modelscope/AdaSeq/tree/master/examples/MCCE}.
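To make the contrast concrete, the following is a minimal sketch (in PyTorch with Hugging Face Transformers) of the difference between cross-encoder scoring, which needs one PLM forward pass per (mention, type) pair, and an MCCE-style pass that packs the K recalled candidates into a single input and scores them concurrently. The model name, the toy scoring head, and the way candidates are appended are illustrative assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative components; bert-base-uncased and the linear scoring head are
# placeholders, not the configuration used in the paper.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
score_head = torch.nn.Linear(encoder.config.hidden_size, 1)

mention_ctx = "Joe Biden delivered a speech at the White House."
candidates = ["president", "politician", "athlete"]  # K recalled type candidates

# Cross-encoder: one forward pass per (mention, type) pair -> N (or K) passes.
ce_scores = []
for t in candidates:
    enc = tokenizer(mention_ctx, t, return_tensors="pt")
    cls = encoder(**enc).last_hidden_state[:, 0]  # [CLS] representation
    ce_scores.append(score_head(cls).item())

# MCCE-style filtering: append all K candidates to the context and score them
# concurrently in ONE forward pass (simplified: one slot per candidate).
ctx_ids = tokenizer(mention_ctx)["input_ids"]   # includes [CLS] ... [SEP]
packed_ids, cand_positions = list(ctx_ids), []
for t in candidates:
    cand_positions.append(len(packed_ids))      # first sub-token of this candidate
    packed_ids += tokenizer(t, add_special_tokens=False)["input_ids"]
    packed_ids.append(tokenizer.sep_token_id)

hidden = encoder(input_ids=torch.tensor([packed_ids])).last_hidden_state
mcce_scores = [score_head(hidden[0, p]).item() for p in cand_positions]

print("cross-encoder scores:", ce_scores)
print("MCCE-style scores   :", mcce_scores)
```

The actual MCCE variants studied in the paper differ in how candidate slots are represented and how attention among them is arranged; the sketch only illustrates why a single packed pass amortizes the PLM cost across all K candidates instead of paying it once per type.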