Awareness of the possible impacts associated with artificial intelligence has risen in proportion to progress in the field. While there are tremendous benefits to society, many argue that there are just as many, if not more, concerns related to advanced forms of artificial intelligence. Accordingly, research into methods for developing artificial intelligence safely is increasingly important. In this paper, we provide an overview of one such safety paradigm: containment, examined with a critical lens aimed toward generative adversarial networks and potentially malicious artificial intelligence. Additionally, we illuminate the potential for a developmental blind spot arising from the stovepiping of containment mechanisms.