Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited. In this work, we provide evidence that disentangled representations coupled with sparse base-predictors improve generalization. In the context of multi-task learning, we prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations. Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem. Finally, we explore a meta-learning version of this algorithm based on group-Lasso multiclass SVM base-predictors, for which we derive a tractable dual formulation. It obtains competitive results on standard few-shot classification benchmarks, while each task uses only a fraction of the learned representations.