We propose to build directly upon our longstanding prior R&D in AI/machine ethics in an attempt to realize the blue-sky idea of AI that can thwart mass shootings by bringing its ethical reasoning to bear. The R&D in question is overtly and avowedly logicist in form, but since we are hardly the only ones who have laid a firm foundation for imbuing AIs with their own ethical sensibility, we believe the pursuit of our proposal by those in different methodological camps should be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition in two simulations: one in which the AI saves the lives of innocents by locking out a malevolent human's gun, and a second in which the AI allows this malevolent agent to be neutralized by law enforcement. Along the way, some objections are anticipated and rebutted.