Artificial intelligence (AI) systems can detect disease-related acoustic patterns in cough sounds, offering a scalable and cost-effective approach to tuberculosis (TB) screening in high-burden, resource-limited settings. Previous studies have been limited by small datasets, under-representation of symptomatic non-TB patients, and recordings collected in controlled environments. In this study, we enrolled 512 participants at two hospitals in Zambia, categorised into three groups: bacteriologically confirmed TB (TB+), symptomatic patients with other respiratory diseases (OR), and healthy controls (HC). Usable cough recordings with demographic and clinical data were obtained from 500 participants. Deep learning classifiers based on pre-trained speech foundation models were fine-tuned on cough recordings to predict diagnostic categories. The best-performing model, trained on 3-second audio clips, achieved an AUROC of 85.2% for distinguishing TB coughs from all other participants (TB+/Rest) and 80.1% for TB+ versus symptomatic OR participants (TB+/OR). Incorporating demographic and clinical features improved performance to 92.1% for TB+/Rest and 84.2% for TB+/OR. At a probability threshold of 0.38, the multimodal model reached 90.3% sensitivity and 73.1% specificity for TB+/Rest, meeting WHO target product profile benchmarks for TB screening. Adversarial testing and stratified analyses showed that the model was robust to confounding factors including background noise, recording time, and device variability. These results demonstrate the feasibility of cough-based AI for TB screening in real-world, low-resource settings.
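The reported operating point (90.3% sensitivity and 73.1% specificity at a probability threshold of 0.38) and the AUROC values follow from standard evaluation of the model's predicted probabilities. The snippet below is a minimal sketch of that evaluation, not the study's code: it assumes Python with scikit-learn and uses synthetic placeholder predictions (`y_true`, `y_prob`) in place of the held-out test data.

```python
# Minimal sketch (not the authors' code): scoring a binary TB+/Rest classifier
# from its probability outputs, at a fixed decision threshold of 0.38.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)

# Synthetic placeholders; in the study these would be the labels and
# predicted probabilities of the fine-tuned multimodal model on the test split.
y_true = rng.integers(0, 2, size=500)                       # 1 = TB+, 0 = Rest
y_prob = np.clip(y_true * 0.4 + rng.normal(0.4, 0.2, 500), 0, 1)

auroc = roc_auc_score(y_true, y_prob)

threshold = 0.38                                            # threshold reported in the abstract
y_pred = (y_prob >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                # true positive rate
specificity = tn / (tn + fp)                                # true negative rate

print(f"AUROC: {auroc:.3f}")
print(f"Sensitivity at {threshold}: {sensitivity:.3f}")
print(f"Specificity at {threshold}: {specificity:.3f}")
```

In practice the threshold would be chosen on a validation split to meet the WHO target product profile (around 90% sensitivity with at least 70% specificity for a screening test) and then applied unchanged to the held-out test set, as in the operating point quoted above.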