Disparities in access to healthcare have been well documented in the United States, but their effects on electronic health record (EHR) data reliability and on the clinical models built from those data are poorly understood. Using an All of Us dataset of 134,513 participants, we investigate the effects of access to care across the medical machine learning pipeline, including medical condition rates, data quality, outcome label accuracy, and prediction performance. Our findings reveal that patients with cost-constrained or delayed care have lower EHR reliability, as measured against patient self-reported conditions, for 78% of the examined medical conditions. In a prediction task of Type II diabetes incidence, we demonstrate that clinical risk prediction performance can be worse for patients without standard care, with gaps of 3.6 percentage points in balanced accuracy and 9.4 percentage points in sensitivity for those with cost-constrained or delayed care. We evaluate solutions to mitigate these disparities and find that including patient self-reported conditions improves performance for patients with lower access to care, yielding 11.2 percentage points higher sensitivity and effectively narrowing the performance gap between standard and delayed or cost-constrained care. These findings provide the first large-scale evidence that healthcare access systematically affects both data reliability and clinical prediction performance. By revealing how access barriers propagate through the medical machine learning pipeline, our work suggests that improving model equity requires addressing both data collection biases and algorithmic limitations. More broadly, this analysis provides an empirical foundation for developing clinical prediction systems that work effectively for all patients, regardless of their access to care.
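For illustration only, the sketch below shows one way the reported subgroup gaps in balanced accuracy and sensitivity could be computed; it is not the paper's implementation, and the column names (`access_group`, `y_true`, `y_pred`) and group labels are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of the subgroup evaluation
# described in the abstract: compare balanced accuracy and sensitivity between
# patients with standard care and those with cost-constrained or delayed care.
# Column names ("access_group", "y_true", "y_pred") are hypothetical.
import pandas as pd
from sklearn.metrics import balanced_accuracy_score, recall_score

def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Compute balanced accuracy and sensitivity per access-to-care group."""
    rows = []
    for group, g in df.groupby("access_group"):
        rows.append({
            "access_group": group,
            "balanced_accuracy": balanced_accuracy_score(g["y_true"], g["y_pred"]),
            "sensitivity": recall_score(g["y_true"], g["y_pred"]),  # recall of the positive class
            "n": len(g),
        })
    return pd.DataFrame(rows)

# Toy usage with an illustrative binary diabetes-incidence label.
toy = pd.DataFrame({
    "access_group": ["standard"] * 4 + ["cost_constrained_or_delayed"] * 4,
    "y_true":       [1, 0, 1, 0, 1, 0, 1, 0],
    "y_pred":       [1, 0, 1, 0, 0, 0, 1, 0],
})
print(subgroup_metrics(toy))
```

The percentage-point gaps reported above correspond to the differences between the rows of such a per-group table.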