Background: Since the launch of various generative AI tools, scientists have been striving to evaluate their capabilities and outputs, in the hope of establishing trust in their generative abilities. Regulations and guidelines are emerging to verify generated content and identify novel uses.

Objective: We aim to demonstrate how ChatGPT claims can be checked computationally using the rigor of network models. Specifically, we fact-check, at the aggregate level, the knowledge embedded in biological graphs constructed from ChatGPT-generated content.

Methods: We adopted a biological networks approach that enables systematic interrogation of the entities ChatGPT links together. We designed an ontology-driven fact-checking algorithm that compares biological graphs constructed from approximately 200,000 PubMed abstracts with counterparts constructed from a dataset generated by the ChatGPT-3.5 Turbo model.

Results: In 10 samples of 250 records, each randomly selected from a ChatGPT dataset of 1,000 "simulated" articles, the fact-checking link accuracy ranged from 70% to 86%. The computational process was followed by a manual process using the IntAct molecular interaction database and the Gene Regulatory Network database (GRNdb) to confirm the validity of the links identified computationally. We also found that edge distances in the ChatGPT graphs were significantly shorter (90-153) than those in the literature graphs (236-765). This pattern held across all 10 samples.

Conclusion: This study demonstrated high accuracy of the aggregate disease-gene relationships found in ChatGPT-generated texts. The strikingly consistent pattern may illuminate new biological pathways and open the door to new research opportunities.
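The core of the fact-checking step described in Methods can be illustrated as a set comparison between edges of the two graphs. The sketch below is a minimal, hypothetical illustration (the entity names, edges, and function are ours, not the study's data or code): link accuracy is the fraction of disease-gene edges in the generated-text graph that are confirmed by the literature-derived graph.

```python
# Minimal sketch, assuming edges are (disease, gene) pairs extracted
# from text; names below are illustrative, not from the study's data.

def link_accuracy(candidate_edges, reference_edges):
    """Fraction of candidate (undirected) edges confirmed in the reference graph."""
    ref = {frozenset(e) for e in reference_edges}
    if not candidate_edges:
        return 0.0
    confirmed = sum(1 for e in candidate_edges if frozenset(e) in ref)
    return confirmed / len(candidate_edges)

# Toy graphs: literature-derived edges vs. edges from generated text.
literature = [("T2D", "TCF7L2"), ("T2D", "PPARG"), ("asthma", "IL13")]
generated = [("T2D", "TCF7L2"), ("asthma", "IL13"), ("T2D", "FTO")]

acc = link_accuracy(generated, literature)
print(f"link accuracy: {acc:.2f}")  # 2 of 3 generated edges are confirmed
```

In the study, unconfirmed computational matches were further checked manually against IntAct and GRNdb; the sketch covers only the automated set-overlap step.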