An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: what happened (or might happen) to cause this failure, and why did (or might) it happen? These are disguised normative questions - they really ask what ought to have happened, and how the humans involved ought to have behaved. To answer these normative questions, law and regulation seek a narrative explanation, a story. At present, we seek these kinds of narrative explanation from AI technology, because as humans we come to understand how a technology works by constructing a story that explains it. Our cultural history makes this inevitable - authors like Asimov, writing narratives about future AI technologies such as intelligent robots, have told us that these machines act in ways explainable by the narrative logic we use to explain human actions, and so can be explained to us in those same terms. This is, at least currently, not true. This work argues that the problem can only be solved by working from both sides. Technologists will need to find ways to tell stories which law and regulation can use. But law and regulation will also need to accept different kinds of narrative - stories about fundamental legal and regulatory concepts such as fairness and reasonableness that differ from those we are used to.