What we expect from radiology AI algorithms will shape the selection and implementation of AI in radiologic practice. In this paper I consider prevailing expectations of AI and compare them to the expectations we have of human readers. I observe that the expectations of AI and of radiologists are fundamentally different. The expectations of AI are based on a strong and justified mistrust of the way AI makes decisions. Because AI decisions are not well understood, it is difficult to know how the algorithms will behave in new, unexpected situations. However, this mistrust is not mirrored in our expectations of human readers. Despite well-proven idiosyncrasies and biases in human decision making, we take comfort from the assumption that others make decisions in the same way we do, and we trust our own decision making. Despite humans' poor ability to explain their decision-making processes, we accept the explanations of decisions that other humans give. Because the goal of radiology is the most accurate radiologic interpretation, our expectations of radiologists and of AI should be similar, and both should reflect a healthy mistrust of the complicated and partially opaque decision processes occurring in computer algorithms and human brains alike. This is generally not the case now.