Machine Learning (ML) has recently been demonstrated to rival expert-level human accuracy in prediction and detection tasks in a variety of domains, including medicine. Despite these impressive findings, however, a key barrier to the full realization of ML's potential in medical prognoses is technology acceptance. Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models, but these efforts suffer from a limitation intrinsic to their design: they work best at identifying why a system fails, but perform poorly at explaining when and why a model's prediction is correct. We posit that the acceptability of ML predictions in expert domains is limited by two key factors: a machine prediction horizon that extends beyond human capability, and the inability of machine predictions to incorporate human intuition into the underlying models. We propose the use of a novel ML architecture, Neural Ordinary Differential Equations (NODEs), to enhance human understanding and encourage acceptability. Our approach places human cognitive intuition at the center of the algorithm design and offers a distribution of predictions rather than single outputs. We explain how this approach may significantly improve human-machine collaboration in prediction tasks in expert domains such as medical prognoses. We propose a model and demonstrate, by expanding on a concrete example from the literature, how our model advances the vision of future hybrid Human-AI systems.
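To make the core idea concrete, the sketch below is a minimal illustration (not the authors' implementation) of how a Neural ODE can emit a prediction at every point along a time horizon, so a clinician sees a curve of estimates rather than a single opaque output. All names (`ODEFunc`, `predict_trajectory`, the readout layer) and the fixed-step Euler solver are assumptions made for illustration only.

```python
# Illustrative Neural ODE sketch: a learned vector field dh/dt = f_theta(h, t)
# is integrated over a prediction horizon, producing one estimate per time step.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Parameterizes the time derivative dh/dt = f_theta(h, t) of a latent state h."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, t: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Concatenate time so the dynamics can be time-dependent.
        t_col = torch.full((h.shape[0], 1), float(t))
        return self.net(torch.cat([h, t_col], dim=1))


def predict_trajectory(func: ODEFunc, h0: torch.Tensor, readout: nn.Module,
                       t_grid: torch.Tensor) -> torch.Tensor:
    """Fixed-step Euler integration of the latent state; returns a prediction
    (e.g., a risk score) at every grid point, yielding a distribution of
    estimates across the horizon instead of a single output."""
    h = h0
    preds = []
    for i in range(len(t_grid) - 1):
        dt = t_grid[i + 1] - t_grid[i]
        h = h + dt * func(t_grid[i], h)   # Euler step: h(t+dt) ~ h(t) + dt * dh/dt
        preds.append(torch.sigmoid(readout(h)))
    return torch.stack(preds, dim=1)


if __name__ == "__main__":
    dim = 8
    func = ODEFunc(dim)
    readout = nn.Linear(dim, 1)                  # maps latent state to a risk score
    h0 = torch.randn(4, dim)                     # hypothetical encoded patient states at t0
    t_grid = torch.linspace(0.0, 1.0, steps=11)  # normalized prediction horizon
    risk_curve = predict_trajectory(func, h0, readout, t_grid)
    print(risk_curve.shape)  # torch.Size([4, 10, 1]): one risk estimate per time step
```

In practice a higher-order adaptive solver would replace the Euler loop, but the shape of the output is the point: a trajectory of predictions over time that a human expert can inspect and compare against their own intuition.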