Agent-based modelling is a powerful tool for simulating human systems, yet when human behaviour cannot be described by simple rules or by the maximisation of one's own profit, the methodology quickly reaches its limits. Machine learning has the potential to bridge this gap by linking what people observe to how they act in pursuit of their goals. In this paper we use a framework for agent-based modelling that incorporates human values such as fairness, conformity and altruism. Using this framework we simulate a public goods game and compare the results with experimental data. We find good agreement between simulation and experiment, and we furthermore find that the presented framework outperforms strict reinforcement learning. Both the framework and the utility function are generic enough to be applied to arbitrary systems, which makes this method a promising candidate for the foundation of a universal agent-based model.
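To make the idea concrete, a value-augmented utility for a public goods game might be sketched as follows. This is a minimal illustrative assumption, not the paper's actual specification: the weights, functional forms, and the `best_response` decision rule are all hypothetical choices made for this sketch.

```python
def payoff(contribution, others_total, n_agents, multiplier=1.6, endowment=20):
    """Standard public goods payoff: keep the uncontributed endowment,
    plus an equal share of the multiplied common pool."""
    pool = (contribution + others_total) * multiplier
    return endowment - contribution + pool / n_agents


def utility(contribution, others, n_agents, w_fair=0.3, w_conf=0.2, w_alt=0.1):
    """Material payoff extended with illustrative fairness, conformity
    and altruism terms (weights are arbitrary assumptions)."""
    others_total = sum(others)
    mean_others = others_total / len(others)
    material = payoff(contribution, others_total, n_agents)
    fairness = -abs(contribution - mean_others)           # dislike deviating from the group mean
    conformity = -((contribution - mean_others) ** 2) / 10  # penalise standing out
    altruism = contribution                                # value giving in itself
    return material + w_fair * fairness + w_conf * conformity + w_alt * altruism


def best_response(others, n_agents, endowment=20):
    """Choose the integer contribution in 0..endowment that maximises utility."""
    return max(range(endowment + 1),
               key=lambda c: utility(c, others, n_agents))
```

Under these assumed weights, an agent facing free-riders contributes nothing, while an agent in a contributing group is pulled toward the group mean rather than the pure-payoff optimum of zero, which is the qualitative behaviour the value terms are meant to capture.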