Counterfactual Regret Minimization (CFR) has achieved many impressive results in solving large-scale Imperfect Information Games (IIGs). Neural-network-approximated CFR (neural CFR) is one of the promising techniques for reducing computation and memory consumption by generalizing decision information across similar states. Current neural CFR algorithms have to approximate cumulative regrets, but approximating them efficiently and accurately in a large-scale IIG remains a difficult challenge. In this paper, a new CFR variant, Recursive CFR (ReCFR), is proposed. In ReCFR, Recursive Substitute Values (RSVs) are learned and used to replace cumulative regrets. It is proven that ReCFR converges to a Nash equilibrium at a rate of $O({1}/{\sqrt{T}})$. Based on ReCFR, a new model-free neural CFR algorithm with bootstrap learning, Neural ReCFR-B, is proposed. Due to the recursive and non-cumulative nature of RSVs, Neural ReCFR-B has lower-variance training targets than other neural CFR algorithms. Experimental results show that Neural ReCFR-B is competitive with state-of-the-art neural CFR algorithms at a much lower training cost.
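To ground the notion of cumulative regrets that neural CFR must approximate, the following is a minimal sketch of the standard regret-matching update used in tabular CFR (this illustrates the quantity ReCFR replaces with RSVs; it is not the paper's algorithm, and the function name is illustrative):

```python
import numpy as np

def regret_matching(cum_regrets: np.ndarray) -> np.ndarray:
    """Map cumulative regrets at one information set to a strategy.

    Standard tabular CFR update: play each action in proportion to its
    positive cumulative regret. Neural CFR methods must approximate the
    cum_regrets vector with a network; ReCFR instead learns RSVs.
    """
    pos = np.maximum(cum_regrets, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    # No action has positive regret: fall back to the uniform strategy.
    return np.full_like(cum_regrets, 1.0 / len(cum_regrets))

# Example: regrets accumulated over iterations at a single information set
print(regret_matching(np.array([3.0, -1.0, 1.0])))  # -> [0.75, 0.0, 0.25]
```

Because `cum_regrets` grows over iterations, it is a high-variance, unbounded target for a network to fit, which is the motivation for the non-cumulative RSV targets in Neural ReCFR-B.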