We propose to measure fine-grained domain relevance: the degree to which a term is relevant to a broad domain (e.g., computer science) or a narrow one (e.g., deep learning). Such measurement is crucial for many downstream tasks in natural language processing. To handle long-tail terms, we build a core-anchored semantic graph, which uses core terms with rich description information to semantically bridge the vast remaining fringe terms. To support a fine-grained domain without relying on a matching corpus for supervision, we develop hierarchical core-fringe learning, which learns core and fringe terms jointly in a semi-supervised manner, contextualized in the hierarchy of the domain. To reduce expensive human effort, we employ automatic annotation and hierarchical positive-unlabeled learning. Our approach applies to large or small domains, covers head or tail terms, and requires little human effort. Extensive experiments demonstrate that our methods outperform strong baselines and even surpass professional human performance.
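To make the core-fringe idea concrete, here is a minimal, hypothetical sketch of semi-supervised score propagation on a core-anchored graph: a few core terms carry seed relevance labels for a narrow domain, and unlabeled fringe terms inherit scores from the core terms they link to. The toy terms, edges, and the plain label-propagation update are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical toy graph: 3 core terms (with seed labels) and 2 fringe terms.
terms = ["neural network", "gradient descent", "database index",  # core
         "backpropagation", "b-tree"]                             # fringe
# Symmetric adjacency: 1 = a semantic link (e.g., via description text).
A = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)
# Seed labels on core terms: relevance to a narrow "deep learning" domain.
y = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
core_mask = np.array([True, True, True, False, False])

# Simple label propagation: f <- alpha * D^-1 A f + (1 - alpha) * y,
# clamping core terms to their seed labels each round.
D_inv = np.diag(1.0 / A.sum(axis=1))
f = y.copy()
for _ in range(50):
    f = 0.85 * (D_inv @ A @ f) + 0.15 * y
    f[core_mask] = y[core_mask]  # core labels stay fixed

scores = dict(zip(terms, f))
```

Under these assumptions, the fringe term "backpropagation" ends up with a high score because both of its neighbors are relevant core terms, while "b-tree" stays low because its only neighbor is an irrelevant core term.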