In this work, we investigate the inductive biases that result from learning multiple tasks, either simultaneously (multi-task learning, MTL) or sequentially (pretraining and subsequent finetuning, PT+FT). In the simplified setting of two-layer diagonal linear networks trained with gradient descent, we apply prior theoretical results to describe novel implicit regularization penalties associated with MTL and PT+FT, both of which incentivize feature sharing between tasks and sparsity in learned task-specific features. Notably, these results imply that during finetuning, networks operate in a hybrid of the kernel (or "lazy") regime and the feature learning ("rich") regime identified in prior work. Moreover, we show that PT+FT can exhibit a novel "nested feature selection" behavior not captured by either regime, which biases it to extract a sparse subset of the features learned during pretraining. In ReLU networks, we reproduce all of these qualitative behaviors empirically, in particular verifying that analogues of the sparsity biases predicted by the linear theory hold in the nonlinear case. Our findings hold qualitatively for a deep architecture trained on image classification tasks, and our characterization of the nested feature selection regime motivates a modification to PT+FT that we find empirically improves performance. We also observe that PT+FT (but not MTL) is biased toward learning features that are correlated with (but distinct from) those needed for the auxiliary task, while MTL is biased toward using identical features for both tasks, which can lead to a tradeoff between the two approaches as a function of the number of finetuning samples. Our results shed light on the impact of auxiliary task learning and suggest ways to leverage it more effectively.
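As background for the kernel ("lazy") and feature-learning ("rich") regimes invoked above, the sketch below records the standard single-task implicit-bias characterization of two-layer diagonal linear networks from prior work; the notation (initialization scale $\alpha$, penalty $q_\alpha$) is assumed here for illustration only, and the multi-task and finetuning penalties derived in this work are not reproduced.

$$ f_\theta(x) = \langle \beta, x \rangle, \qquad \beta = w_+^{\odot 2} - w_-^{\odot 2}, \qquad w_+(0) = w_-(0) = \alpha \mathbf{1}, $$

$$ \hat{\beta} = \operatorname*{arg\,min}_{\beta :\, X\beta = y} \; \sum_i q_\alpha(\beta_i), $$

where gradient flow on the squared loss converges (among interpolating solutions) to $\hat{\beta}$, and $q_\alpha$ behaves like a rescaled squared $\ell_2$ penalty as $\alpha \to \infty$ (the kernel regime) and like the $\ell_1$ norm as $\alpha \to 0$ (the rich regime), which is what biases small-initialization training toward sparse solutions. The hybrid and nested-feature-selection behaviors described above modify this single-task picture when an auxiliary task is learned jointly or beforehand.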