Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values generated at the output layer. However, this requirement causes certain drawbacks; e.g., careful hyperparameter tuning is needed to synchronize the feedforward and feedback training, and the feedback path usually has to be updated more frequently than the feedforward path. Training both the feedforward and feedback networks is sufficient to make TP methods work, but is having these layer-wise autoencoders a necessary condition for TP? We answer this question by presenting Fixed-Weight Difference Target Propagation (FW-DTP), which keeps the feedback weights constant during training. We confirm that this simple method, which naturally resolves the abovementioned problems of TP, can still deliver informative target values to the hidden layers for a given task; indeed, FW-DTP consistently achieves higher test performance than the baseline, Difference Target Propagation (DTP), on four classification datasets. We also present a novel propagation architecture that explains the exact form of the feedback function of DTP to analyze FW-DTP.
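To make the idea concrete, the following is a minimal sketch of a single difference-corrected target computation with a fixed feedback function, in the style of FW-DTP. All layer sizes, weight scales, and names (`W1`, `V1`, `f`, `g`, the step size `0.1`) are illustrative assumptions, not the paper's actual setup; the only property it demonstrates is the standard DTP target formula t_l = g(t_{l+1}) + h_l - g(h_{l+1}) evaluated with feedback weights that are never trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer feedforward network (sizes are illustrative).
W1 = rng.standard_normal((4, 3)) * 0.5   # trained via local target losses
W2 = rng.standard_normal((2, 4)) * 0.5

# Fixed feedback weights (the FW-DTP idea): drawn once, never updated,
# so no feedback training phase and no synchronization is needed.
V1 = rng.standard_normal((4, 2)) * 0.5   # feedback from layer 2 to layer 1

def f(W, h):   # feedforward function of one layer
    return np.tanh(W @ h)

def g(V, h):   # feedback function; its weights V stay constant
    return np.tanh(V @ h)

x = rng.standard_normal(3)
h1 = f(W1, x)          # hidden activation
h2 = f(W2, h1)         # output activation

# Output target: a small step that reduces the output loss
# (here, squared error toward a label y; step size is illustrative).
y = np.array([1.0, -1.0])
t2 = h2 - 0.1 * (h2 - y)

# Difference target propagation with the fixed feedback function:
# the "+ h1 - g(h2)" correction cancels the reconstruction error of
# the (here untrained) feedback path, so t1 == h1 whenever t2 == h2.
t1 = g(V1, t2) + h1 - g(V1, h2)

# Each layer would then locally minimize ||f(W_l, h_{l-1}) - t_l||^2.
```

The key point the sketch illustrates is that the difference correction term, not feedback training, is what keeps the propagated targets anchored to the current activations: with `t2 == h2` the target `t1` reduces exactly to `h1` regardless of what `V1` is.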