The generic text preprocessing pipeline, comprising Tokenisation, Normalisation, Stop Words Removal, and Stemming/Lemmatisation, has been implemented in many systems for syntactic ontology matching (OM). However, the lack of standardisation in text preprocessing leads to divergent mapping results. In this paper, we investigate the effect of the text preprocessing pipeline on syntactic OM across 8 Ontology Alignment Evaluation Initiative (OAEI) tracks with 49 distinct alignments. We find that Phase 1 text preprocessing (Tokenisation and Normalisation) is currently more effective than Phase 2 text preprocessing (Stop Words Removal and Stemming/Lemmatisation). To repair Phase 2 text preprocessing, whose lower effectiveness stems from unwanted false mappings, we propose a novel context-based pipeline repair approach that employs an ad hoc check to identify common words that cause false mappings. These words are stored in a reserved word set and applied during text preprocessing. The experimental results show that our approach improves both matching correctness and overall matching performance. We also discuss integrating the classical text preprocessing pipeline with modern large language models (LLMs). We recommend that LLMs invoke the text preprocessing pipeline via function calling, which avoids the unstable true mappings produced by prompt-based LLM approaches, and that LLMs be used to repair false mappings generated by the text preprocessing pipeline.
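As a concrete illustration, the sketch below implements the two preprocessing phases and the reserved-word repair on class labels. It is a minimal, self-contained approximation: the tokenisation rules, the stop-word list, and the contents of RESERVED_WORDS are hypothetical placeholders rather than the paper's actual configuration, and a naive suffix stripper stands in for a real stemmer/lemmatiser.

```python
import re

# Hypothetical reserved-word set: common words that an ad hoc check over the
# corpus has flagged as causing false mappings if removed or stemmed.
RESERVED_WORDS = {"has", "is"}  # placeholder values for illustration

# Small illustrative stop-word list (a real system would use a standard one).
STOP_WORDS = {"a", "an", "the", "of", "in", "has", "is"}

def tokenise(text: str) -> list[str]:
    # Phase 1a: split camelCase, then keep alphanumeric runs as tokens
    text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return re.findall(r"[A-Za-z0-9]+", text)

def normalise(tokens: list[str]) -> list[str]:
    # Phase 1b: lowercase every token
    return [t.lower() for t in tokens]

def remove_stop_words(tokens: list[str]) -> list[str]:
    # Phase 2a: drop stop words, but retain any token in the reserved set
    return [t for t in tokens if t not in STOP_WORDS or t in RESERVED_WORDS]

def stem(tokens: list[str]) -> list[str]:
    # Phase 2b: naive suffix stripping; reserved words pass through
    # unchanged so that repair prevents stemming-induced false mappings
    def strip(t: str) -> str:
        if t in RESERVED_WORDS:
            return t
        for suffix in ("ing", "es", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                return t[: -len(suffix)]
        return t
    return [strip(t) for t in tokens]

def preprocess(label: str) -> list[str]:
    return stem(remove_stop_words(normalise(tokenise(label))))

# Two labels match syntactically iff they preprocess to the same tokens:
print(preprocess("hasAuthor") == preprocess("has_author"))  # True
```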