What would the inputs be to a machine whose output is the destabilization of a robust democracy, or whose emanations could disrupt the political power of nations? In the recent essay "The Coming AI Hackers," Schneier (2021) proposed a future application of artificial intelligence to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems at speeds far exceeding humans' ability to recognize and respond to such threats. This work advances that concept by applying machine learning theory to it, hypothesizing possible "featurization" (input specification and transformation) frameworks for AI hacking. Focusing on the political domain, we develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political systems. We explore the data models, datasets, predictive tasks, and actionable applications associated with each framework. We speculate about the likely practical impact and feasibility of such models, and conclude by discussing their ethical implications.
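To make the notion of a graph featurization concrete, the following is a minimal, purely illustrative sketch (toy data and all variable names are assumptions, not drawn from the paper): a political system represented as legislators (nodes, with party and seniority attributes) connected by bill co-sponsorships (edges), yielding the feature-matrix/adjacency-matrix pair that graph neural networks typically consume.

```python
import numpy as np

# Toy data (hypothetical): four legislators with two node attributes each.
legislators = ["A", "B", "C", "D"]
party       = [0, 0, 1, 1]     # categorical attribute (party affiliation)
seniority   = [12, 3, 7, 1]    # numeric attribute (years in office)

# Node feature matrix X: one row per legislator, one column per attribute.
X = np.array([[p, s] for p, s in zip(party, seniority)], dtype=float)

# Adjacency matrix A: A[i, j] counts bills co-sponsored by legislators i and j.
cosponsorships = [(0, 1), (0, 1), (1, 2), (2, 3)]
A = np.zeros((len(legislators), len(legislators)))
for i, j in cosponsorships:
    A[i, j] += 1
    A[j, i] += 1   # undirected relationship, so keep A symmetric

# A sequence featurization of the same system could instead order each
# legislator's roll-call votes chronologically, giving a matrix of shape
# (num_legislators, num_votes) suited to recurrent or attention models.
print(X.shape, A.shape)  # (4, 2) (4, 4)
```

The (X, A) pair above is the standard input format for graph learning models, so any of a range of architectures could be applied to predict node-level attributes (e.g. a legislator's future vote) or graph-level outcomes (e.g. whether a bill passes).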