Explainable Artificial Intelligence (XAI) research has gained prominence in recent years in response to user communities' demand for greater transparency and trust in AI. This is especially critical as AI is adopted in sensitive fields such as finance and medicine, where the implications for society, ethics, and safety are immense. Thorough systematic reviews show that work in XAI has primarily focused on Machine Learning (ML) for categorization, decision, or action. To the best of our knowledge, no reported work offers an Explainable Reinforcement Learning (XRL) method for trading financial stocks. In this paper, we propose employing SHapley Additive exPlanations (SHAP) on a popular deep reinforcement learning architecture, the deep Q-network (DQN), to explain an agent's action at a given instance in financial stock trading. To demonstrate the effectiveness of our method, we tested it on two popular datasets, SENSEX and DJIA, and report the results.
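The core idea, attributing an agent's chosen action to its state features via Shapley values, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: it uses a hypothetical linear Q-function standing in for a trained DQN, and a simple Monte Carlo permutation estimator of Shapley values rather than the SHAP library's optimized explainers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained DQN: maps a 4-feature state vector
# (e.g. normalized price indicators) to Q-values for the trading
# actions {0: hold, 1: buy, 2: sell}. Weights are hypothetical.
W = rng.normal(size=(3, 4))

def q_values(state):
    return W @ state

def shapley_values(state, baseline, action, n_samples=500):
    """Monte Carlo estimate of each feature's Shapley value for
    Q(state, action), measured relative to a baseline state."""
    d = len(state)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        x = baseline.copy()
        for j in perm:
            before = q_values(x)[action]
            x[j] = state[j]              # add feature j to the coalition
            phi[j] += q_values(x)[action] - before
    return phi / n_samples

state = np.array([1.0, -0.5, 2.0, 0.3])  # current market state (toy values)
baseline = np.zeros(4)                   # reference "average" state
action = int(np.argmax(q_values(state))) # the agent's greedy action
phi = shapley_values(state, baseline, action)

# Efficiency property of Shapley values: the per-feature
# contributions sum to the Q-value gap over the baseline.
assert np.isclose(phi.sum(),
                  q_values(state)[action] - q_values(baseline)[action])
```

In practice one would replace `q_values` with the trained DQN's forward pass and use a library explainer such as those in the SHAP package; the efficiency check at the end is the property that makes the attribution interpretable as "how much each market feature pushed the agent toward this action."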