Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD's errors are bounded in terms of a novel measure, the problem's trajectory crossing time, which can be much smaller than the problem's time horizon.
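To make the contrast between the two estimators concrete, the sketch below compares them on the classic tabular random-walk chain. This example is not taken from the paper: the chain, start state, episode count, and step size are illustrative assumptions. The direct (Monte Carlo) estimator averages the observed return-to-go at each visited state, i.e. it minimizes prediction error on the training data, while TD(0) shrinks the temporal inconsistency between the estimate at a state and the one-step bootstrapped target.

```python
import numpy as np

# Minimal illustrative sketch (not from the paper): tabular value estimation
# on a 5-state random-walk chain. States 1..5 are non-terminal, 0 and 6 are
# terminal; reward +1 on reaching state 6. The chain, start state, and the
# step size ALPHA are assumptions made for this demo.
rng = np.random.default_rng(0)
N_STATES = 7
TERMINALS = {0, 6}
GAMMA = 1.0            # undiscounted episodic setting
ALPHA = 0.05           # TD(0) step size

def sample_trajectory(start=3):
    s, traj = start, []
    while s not in TERMINALS:
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == 6 else 0.0
        traj.append((s, r, s_next))
        s = s_next
    return traj

trajectories = [sample_trajectory() for _ in range(2000)]

# Direct (Monte Carlo) estimation: average the observed return-to-go at each
# visited state, minimizing squared prediction error on the training data.
returns = {s: [] for s in range(N_STATES)}
for traj in trajectories:
    g = 0.0
    for s, r, _ in reversed(traj):
        g = r + GAMMA * g
        returns[s].append(g)
v_mc = np.array([np.mean(returns[s]) if returns[s] else 0.0
                 for s in range(N_STATES)])

# TD(0): reduce the temporal inconsistency between the current estimate V(s)
# and the one-step bootstrapped target r + gamma * V(s').
v_td = np.zeros(N_STATES)
for traj in trajectories:
    for s, r, s_next in traj:
        target = r + GAMMA * v_td[s_next]      # terminal values stay 0
        v_td[s] += ALPHA * (target - v_td[s])

# On this chain the true values of states 1..5 are 1/6, 2/6, ..., 5/6.
print("Monte Carlo:", np.round(v_mc[1:6], 3))
print("TD(0):      ", np.round(v_td[1:6], 3))
```

Because the true values of this chain are known in closed form, both estimators can be checked against them; the paper's analysis concerns how the mean-squared errors of such estimators compare asymptotically, with the gap governed by the trajectory pooling coefficient.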