Disentangled representation learning remains challenging because ground-truth factors of variation do not naturally exist. To address this, we present Vocabulary Disentanglement Retrieval~(VDR), a simple yet effective retrieval-based disentanglement framework that leverages natural language as distant supervision. Our approach builds on the widely used bi-encoder architecture, augmented with disentanglement heads, and is trained on data-text pairs that are readily available on the web or in existing datasets. This makes our approach task- and modality-agnostic, with potential for a wide range of downstream applications. We conduct experiments on 16 datasets in both text-to-text and cross-modal scenarios and evaluate VDR in a zero-shot setting. With the incorporation of disentanglement heads and a minor increase in parameters, VDR achieves significant improvements over the base retriever it is built upon: 9% higher NDCG@10 scores in zero-shot text-to-text retrieval and an average of 13% higher recall in cross-modal retrieval. Compared with other baselines, VDR outperforms them on most tasks while also improving explainability and efficiency.
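As a concrete illustration, the sketch below shows one way a bi-encoder with a vocabulary-space disentanglement head and an in-batch contrastive objective over data-text pairs could be written in PyTorch. This is not the authors' released code: the backbone, the max-pooling over token positions, and the ReLU/log-saturation activation are all assumptions made for illustration.

\begin{verbatim}
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Minimal sketch of an encoder whose output dimensions are tied to
    vocabulary entries (assumed design; not the authors' implementation)."""

    def __init__(self, encoder, hidden_size: int, vocab_size: int):
        super().__init__()
        self.encoder = encoder  # e.g. a BERT-style backbone (assumption)
        self.head = nn.Linear(hidden_size, vocab_size)  # disentanglement head

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        logits = self.head(hidden)  # (batch, seq_len, vocab_size)
        # Mask padding, max-pool over the sequence, then gate to a sparse,
        # non-negative vocabulary-level representation (one plausible choice).
        masked = logits.masked_fill(attention_mask.unsqueeze(-1) == 0,
                                    float("-inf"))
        pooled = masked.max(dim=1).values  # (batch, vocab_size)
        return torch.log1p(torch.relu(pooled))

def contrastive_loss(q_reps, d_reps, temperature: float = 1.0):
    """In-batch-negative contrastive loss over paired data-text examples:
    the i-th query representation should score highest against the i-th
    document representation."""
    scores = q_reps @ d_reps.t() / temperature  # (batch, batch)
    labels = torch.arange(scores.size(0), device=scores.device)
    return nn.functional.cross_entropy(scores, labels)
\end{verbatim}

Because each output dimension corresponds to a vocabulary token, the activated dimensions of a representation can be read off directly as words, which is one way such a head can support the explainability claimed above.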