Cross-dataset testing is critical for evaluating a machine learning (ML) model's performance. However, most studies modelling transcriptomic and clinical data have conducted only intra-dataset testing. Normalization and the use of non-differentially expressed genes (NDEG) can improve the cross-platform classification performance of ML. We therefore aimed to understand whether normalization, NDEG and data source are associated with ML performance in cross-dataset testing. The transcriptomic and clinical data shared by the lung adenocarcinoma cases in TCGA and ONCOSG were used. The best cross-dataset ML performance was reached using transcriptomic data alone and was statistically significantly better than that using transcriptomic and clinical data combined. The best balanced accuracy (BA), area under the curve (AUC) and accuracy were significantly better in ML algorithms trained on TCGA and tested on ONCOSG than in those trained on ONCOSG and tested on TCGA (p < 0.05 for all). Normalization and NDEG greatly improved intra-dataset BA, AUC and accuracy in both datasets, but only nominally improved these metrics in cross-dataset training/testing. Strikingly, modelling transcriptomic data of ONCOSG alone outperformed modelling transcriptomic and clinical data combined. In contrast, inclusion of clinical data in TCGA did not significantly change ML performance, suggesting little value in the clinical data of TCGA. Interestingly, the performance improvements in intra-dataset testing were more prominent in ML models trained on ONCOSG than in those trained on TCGA. Our data thus show that data source, normalization and NDEG are associated with intra-dataset and cross-dataset ML performance in modelling transcriptomic and clinical data. Future work is warranted to understand and reduce ML performance differences in cross-dataset modelling.
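The cross-dataset evaluation setting described above (train on one cohort, test on another whose measurements carry a platform shift, with and without normalization) can be illustrated with a minimal synthetic sketch. This is not the study's actual pipeline: the simulated "platforms", the nearest-centroid classifier, the per-dataset z-score normalization, and all function names (`make_dataset`, `zscore`, `balanced_accuracy`) are illustrative assumptions, not reconstructions of the methods used on TCGA or ONCOSG.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, shift, scale, rng):
    """Two-class toy 'expression' data on 20 genes; `shift`/`scale`
    mimic a platform-specific measurement offset (assumption)."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 20)) + y[:, None] * 1.0
    return X * scale + shift, y

# Platform A (e.g. a TCGA-like cohort) and platform B (e.g. an
# ONCOSG-like cohort) differ only by a global shift and scale.
Xa, ya = make_dataset(200, shift=0.0, scale=1.0, rng=rng)
Xb, yb = make_dataset(200, shift=3.0, scale=2.0, rng=rng)

def zscore(X):
    # Normalize each dataset independently, per gene.
    return (X - X.mean(0)) / X.std(0)

def centroid_fit(X, y):
    # Nearest-centroid "training": one mean profile per class.
    return np.vstack([X[y == c].mean(0) for c in (0, 1)])

def predict(cent, X):
    d = np.linalg.norm(X[:, None, :] - cent[None], axis=2)
    return d.argmin(1)

def balanced_accuracy(y, p):
    # Mean of per-class recalls (the BA metric named in the abstract).
    return float(np.mean([(p[y == c] == c).mean() for c in (0, 1)]))

# Cross-dataset testing without normalization: train on A, test on B.
cent_raw = centroid_fit(Xa, ya)
ba_cross_raw = balanced_accuracy(yb, predict(cent_raw, Xb))

# Cross-dataset testing with per-dataset z-score normalization.
cent_norm = centroid_fit(zscore(Xa), ya)
ba_cross_norm = balanced_accuracy(yb, predict(cent_norm, zscore(Xb)))

print(f"cross-dataset BA, raw:        {ba_cross_raw:.2f}")
print(f"cross-dataset BA, normalized: {ba_cross_norm:.2f}")
```

On this toy data the unnormalized model collapses toward chance-level balanced accuracy because the platform shift moves both classes of cohort B away from the class centroids learned on cohort A, while per-dataset normalization largely removes the shift and restores performance; the real effect sizes reported in the abstract depend, of course, on the actual cohorts and models.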