Philosophers have recently focused on critical epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet this is hard to square with the recent boom in optimism for AI in science, alongside a flood of scientific breakthroughs driven by AI methods. In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science. I show that, in order to understand the epistemic justification for AI-powered breakthroughs, philosophers must examine the role played by deep learning as part of a wider process of discovery. The philosophical distinction between the 'context of discovery' and the 'context of justification' is helpful in this regard. I demonstrate the importance of attending to this distinction with two cases drawn from the scientific literature, and show that epistemic opacity need not diminish AI's capacity to lead scientists to significant and justifiable breakthroughs.