Large Language Models (LLMs) need to adopt Retrieval-Augmented Generation (RAG) to generate factual responses suited to knowledge-based applications in the design process. We present a data-driven method to identify explicit facts of the form head entity :: relationship :: tail entity from patented artefact descriptions. We train RoBERTa Transformer-based sequence classification models on our proprietary dataset of 44,227 sentences. After classifying the tokens in a sentence as entities or relationships, our method applies a second classifier to identify the specific relationship tokens for a given pair of entities. We compare the performance of these models against linear classifiers and Graph Neural Networks (GNNs), both of which incorporate BERT Transformer-based token embeddings, in predicting associations among entities and relationships. We apply our method to 4,870 patents related to fan systems and populate a knowledge base of around 3 million facts. Using this knowledge base, we demonstrate the retrieval of both generalisable and specific domain knowledge for contextualising LLMs.
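To illustrate the first stage of the pipeline described above, the sketch below shows how a RoBERTa token classifier could tag each token in a sentence as part of an entity, a relationship, or neither, before a second classifier pairs entities with relationship spans. This is a minimal illustration, not the authors' implementation: the label set (O, B-ENT, I-ENT, B-REL, I-REL), the example sentence, and the use of the generic "roberta-base" checkpoint are all assumptions for demonstration.

```python
# Minimal sketch of the stage-1 token tagger (hypothetical labels and
# checkpoint; the paper's trained models and label scheme may differ).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Five illustrative labels: O, B-ENT, I-ENT, B-REL, I-REL
tagger = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=5
)

sentence = "The impeller transfers energy to the airflow."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = tagger(**inputs).logits

# One label index per subword token; contiguous ENT/REL spans would then be
# merged into candidate entities and relationships, and a second classifier
# would select the relationship tokens linking each pair of entities.
predicted_tags = logits.argmax(dim=-1).squeeze().tolist()
```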