We consider the problem of creating assistants that can help agents (often humans) solve novel sequential decision problems, assuming the agent cannot explicitly specify its reward function to the assistant. Instead of aiming to automate the task and act in place of the agent, as in current approaches, we give the assistant an advisory role and keep the agent in the loop as the main decision maker. The difficulty is that we must account for potential biases induced by the agent's limitations or constraints, which may cause it to reject advice in seemingly irrational ways. To address this, we introduce a novel formalization of assistance that models these biases, allowing the assistant to infer and adapt to them. We then introduce a new method for planning the assistant's advice that scales to large decision-making problems. Finally, we show experimentally that our approach adapts to these agent biases and yields higher cumulative reward for the agent than automation-based alternatives.
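The abstract does not spell out the formalization, so the following is only a minimal illustrative sketch under assumptions of ours, not the paper's method: a one-step advice setting where the agent accepts advice with a sigmoid probability governed by an unknown "conservatism" bias, and the assistant infers that bias by Bayesian updating on the agent's accept/reject responses while choosing advice that maximizes the agent's expected reward under its current belief. All names and the acceptance model here are hypothetical.

```python
# Illustrative sketch only: an assistant that infers a hidden agent bias
# from accept/reject decisions and adapts its advice accordingly.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 candidate actions whose rewards the assistant knows,
# and an agent whose default preference is action 0.
rewards = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
agent_pref = 0

def accept_prob(advice, bias):
    """Chance the agent accepts the advice: higher bias means stronger
    attachment to its own preferred action (sigmoid of the reward gap)."""
    gain = rewards[advice] - rewards[agent_pref]
    return 1.0 / (1.0 + np.exp(-(gain - bias)))

# Discretized posterior over the unknown bias parameter.
bias_grid = np.linspace(0.0, 6.0, 61)
posterior = np.ones_like(bias_grid) / bias_grid.size

true_bias = 2.5  # hidden from the assistant

for step in range(30):
    # Advise the action maximizing the agent's expected reward: the agent
    # takes the advised action if it accepts, else its preferred action.
    exp_value = [
        np.sum(posterior * (accept_prob(a, bias_grid) * rewards[a]
                            + (1 - accept_prob(a, bias_grid)) * rewards[agent_pref]))
        for a in range(len(rewards))
    ]
    advice = int(np.argmax(exp_value))

    # Simulated agent response under the true (hidden) bias.
    accepted = rng.random() < accept_prob(advice, true_bias)

    # Bayesian update of the bias posterior from the accept/reject signal.
    lik = accept_prob(advice, bias_grid)
    posterior *= lik if accepted else (1 - lik)
    posterior /= posterior.sum()

est = float(np.sum(posterior * bias_grid))
print(f"advised action {advice}, estimated bias ~ {est:.2f}")
```

As the posterior concentrates, the assistant's recommendations account for how much reward improvement this particular agent needs before it will follow advice, which is the adaptive behavior the abstract describes at a high level.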