Experimental advances enabling high-resolution external control create new opportunities to produce materials with exotic properties. In this work, we investigate how a multi-agent reinforcement learning approach can be used to design external control protocols for self-assembly. We find that a fully decentralized approach performs remarkably well, even with only a "coarse" level of external control. More importantly, we find that a partially decentralized approach, in which each agent receives information about its local environment, allows us to better steer the system toward a target distribution. We explain this by analyzing our approach as a partially observed Markov decision process. With a partially decentralized approach, the agent is able to act more presciently, both by preventing the formation of undesirable structures and by better stabilizing target structures, as compared to a fully decentralized approach.
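The distinction between the two observation schemes can be illustrated with a minimal sketch. The function and lattice below are hypothetical and not taken from the paper: each lattice site is treated as an agent, a "fully decentralized" agent observes only its own site, and a "partially decentralized" agent additionally observes its four nearest neighbors, approximating the local-environment information described above.

```python
import numpy as np

def local_observation(lattice, i, j, include_neighbors):
    """Build the observation for the agent at site (i, j).

    include_neighbors=False -> fully decentralized: own state only.
    include_neighbors=True  -> partially decentralized: own state plus
    the four nearest neighbors (periodic boundaries).
    """
    if not include_neighbors:
        return np.array([lattice[i, j]])
    n, m = lattice.shape
    neighbors = [lattice[(i - 1) % n, j], lattice[(i + 1) % n, j],
                 lattice[i, (j - 1) % m], lattice[i, (j + 1) % m]]
    return np.array([lattice[i, j]] + neighbors)

# Toy 3x3 lattice of site states, purely for illustration.
lattice = np.arange(9).reshape(3, 3)
print(local_observation(lattice, 1, 1, include_neighbors=False))  # own state only
print(local_observation(lattice, 1, 1, include_neighbors=True))   # own state + 4 neighbors
```

Under this framing, the richer observation shrinks the gap between what an agent sees and the underlying Markov state, which is one way to interpret the partially observed Markov decision process analysis.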