Reinforcement Learning (RL) promises to provide data-driven support for decision-making in a wide range of problems in healthcare, education, business, and other domains. Classical RL methods focus on the mean of the total return and thus may provide misleading results for the heterogeneous populations that commonly underlie large-scale datasets. We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity. We propose Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy within a given policy class. Our auto-clustered algorithms automatically detect and identify homogeneous sub-populations while estimating the Q-function and the optimal policy for each sub-population. We establish convergence rates and construct confidence intervals for the estimators obtained by ACPE and ACPI. We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset; the latter analysis shows evidence of value heterogeneity and confirms the advantages of our new method.
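To make the auto-clustering idea concrete, the sketch below shows one way an alternating scheme could jointly cluster trajectories and fit a per-cluster linear Q-function on synthetic data. This is an illustrative toy only, not the paper's ACPE/ACPI algorithm: the synthetic MDP, the fixed evaluation policy, and every helper (`simulate`, `fit_q`, `bellman_residual`) are assumptions introduced for demonstration.

```python
# Toy sketch only: NOT the authors' ACPE algorithm. It conveys the idea of
# alternating between (a) fitting a Q-function per cluster and (b) reassigning
# each trajectory to the cluster whose Q-function explains it best.
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.9          # discount factor
K = 2                # assumed number of latent sub-populations
N, T = 60, 20        # number of trajectories, steps per trajectory

def simulate(group):
    """Toy 1-d MDP whose reward sign differs across latent groups (assumption)."""
    s = rng.normal(size=T)
    a = rng.integers(0, 2, size=T)
    sign = 1.0 if group == 0 else -1.0
    r = sign * (s * (2 * a - 1)) + 0.1 * rng.normal(size=T)
    s_next = np.roll(s, -1)  # crude next-state transition, for illustration only
    return s, a, r, s_next

def features(s, a):
    """Linear Q-function features phi(s, a)."""
    return np.column_stack([np.ones_like(s), s, a, s * a])

def fit_q(trajs, w_old):
    """One least-squares fitted-Q-evaluation step for a fixed evaluation policy."""
    phi = np.vstack([features(s, a) for s, a, _, _ in trajs])
    # Bellman target uses the previous weight iterate and the fixed action a'=1
    # (an arbitrary evaluation policy chosen for this toy example).
    tgt = np.concatenate([
        r + GAMMA * features(sn, np.ones_like(sn)) @ w_old
        for _, _, r, sn in trajs
    ])
    w, *_ = np.linalg.lstsq(phi, tgt, rcond=None)
    return w

def bellman_residual(traj, w):
    """Mean squared Bellman error of one trajectory under weights w."""
    s, a, r, sn = traj
    q = features(s, a) @ w
    tgt = r + GAMMA * features(sn, np.ones_like(sn)) @ w
    return np.mean((q - tgt) ** 2)

# Data: half the trajectories from each latent group (labels are unobserved).
trajs = [simulate(i % 2) for i in range(N)]
labels = rng.integers(0, K, size=N)            # random initial clustering
ws = [np.zeros(4) for _ in range(K)]

for _ in range(10):  # alternate per-cluster fitting and reassignment
    for k in range(K):
        members = [t for t, l in zip(trajs, labels) if l == k]
        if members:
            ws[k] = fit_q(members, ws[k])
    labels = np.array([
        int(np.argmin([bellman_residual(t, w) for w in ws])) for t in trajs
    ])

print("recovered cluster sizes:", np.bincount(labels, minlength=K))
```

The assignment step uses the per-trajectory Bellman residual as the fit criterion, mirroring the intuition that trajectories from the same homogeneous sub-population share a Q-function; the paper's actual estimators, convergence rates, and confidence intervals are developed formally and differ from this sketch.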