In recent years, there has been an increasing interest in exploiting logically specified background knowledge in order to obtain neural models (i) with better performance, (ii) able to learn from less data, and/or (iii) guaranteed to be compliant with the background knowledge itself, e.g., for safety-critical applications. In this survey, we retrace such works and categorize them based on (i) the logical language that they use to express the background knowledge and (ii) the goals that they achieve.