Functional Electrical Stimulation (FES) is a technique for evoking muscle contraction with low-energy electrical signals, and it can animate paralysed limbs. Yet how to apply FES to achieve desired movements remains an open challenge, one accentuated by the complexity of the human body and the non-stationarity of muscle responses: the former makes inverse dynamics difficult, and the latter degrades control performance over extended periods of use. Here, we address the challenge with a data-driven approach. Specifically, we learn to control FES through Reinforcement Learning (RL), which can automatically customise the stimulation for each patient. However, RL typically makes Markovian assumptions, whereas FES control systems are non-Markovian because of the non-stationarities. To deal with this problem, we use a recurrent neural network to create Markovian state representations. We cast FES control as an RL problem and train RL agents to control FES in different settings, both in simulation and in the real world. The results show that our RL controllers maintain control performance over long periods and have better stimulation characteristics than PID controllers.
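To make the core idea concrete, here is a minimal sketch of how a recurrent network can fold a non-Markovian observation history into a single hidden state that a policy then acts on. All names, dimensions, and the plain Elman-style recurrence are illustrative assumptions, not the paper's actual architecture or learned weights.

```python
import numpy as np

# Hypothetical sizes for observations, hidden state, and action.
OBS_DIM, HID_DIM, ACT_DIM = 4, 8, 1

rng = np.random.default_rng(0)

# Randomly initialised weights, standing in for trained parameters.
W_xh = rng.normal(scale=0.1, size=(HID_DIM, OBS_DIM))
W_hh = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))
W_ha = rng.normal(scale=0.1, size=(ACT_DIM, HID_DIM))

def encode(obs_history):
    """Fold a sequence of observations into one hidden state.

    The hidden state summarises the whole history, so the policy can
    treat it as a Markovian state even though each raw observation
    alone is not (e.g. because muscle response drifts over time).
    """
    h = np.zeros(HID_DIM)
    for obs in obs_history:
        h = np.tanh(W_xh @ obs + W_hh @ h)  # Elman RNN update
    return h

def policy(h):
    """Map the hidden state to a bounded stimulation command in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W_ha @ h)))  # sigmoid squashing

# Usage: a 20-step history of (made-up) sensor observations.
history = [rng.normal(size=OBS_DIM) for _ in range(20)]
h_t = encode(history)   # Markovian state representation
u_t = policy(h_t)       # stimulation intensity the controller would apply
```

In a real training loop, `encode` and `policy` would be optimised jointly by an RL algorithm against a movement-tracking reward; the sketch only shows the state-representation step that restores the Markov property.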