Most Fairness in AI research focuses on exposing biases in AI systems. A broader lens on fairness reveals that AI can serve a greater aspiration: rooting out societal inequities at their source. Specifically, we focus on inequities in health information, and aim to reduce bias in that domain using AI. The AI algorithms under the hood of search engines and social media, many of which are based on recommender systems, have an outsized influence on the quality of medical and health information online. Therefore, embedding bias detection and reduction into the recommender systems that serve up medical and health content online could have a correspondingly large positive impact on patient outcomes and wellbeing. In this position paper, we offer the following contributions: (1) we propose a novel framework of Fairness via AI, inspired by insights from medical education, sociology, and antiracism; (2) we define a new term, bisinformation, which is related to, but distinct from, misinformation, and encourage researchers to study it; (3) we propose using AI to study, detect, and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society; and (4) we suggest several pillars and pose several open problems in order to seed inquiry in this new space. While part (3) of this work specifically focuses on the health domain, the fundamental computer science advances and contributions stemming from research efforts in bias reduction and Fairness via AI have broad implications for all areas of society.