Explaining the behaviour of intelligent systems will become increasingly, and perhaps intractably, challenging as models grow in size and complexity. We may not be able to obtain an explanation for every prediction made by a brain-scale model, nor can we expect explanations to remain objective or apolitical. Our functionalist understanding of these models confers less advantage than we might assume. Models precede explanations, and can be useful even when both model and explanation are incorrect. Explainability may never win the race against complexity, but this is less problematic than it seems.