Recent success in fine-tuning large models, pretrained on broad data at scale, for downstream tasks has led to a significant paradigm shift in deep learning: from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. As the representations of pretrained models serve as a foundation for a variety of downstream tasks, this paper proposes a new task-agnostic framework, \textit{SynBench}, that measures the quality of pretrained representations using synthetic data. We establish a reference via a theoretically derived robustness-accuracy tradeoff for class-conditional Gaussian mixtures. Given a pretrained model, the representations of data synthesized from the Gaussian mixture are compared against this reference to infer the representation quality. By taking the ratio of the areas under the robustness-accuracy curves of the raw data and their representations, SynBench offers a quantifiable score for robustness-accuracy performance benchmarking. Our framework applies to a wide range of pretrained models taking continuous data inputs and is independent of downstream tasks and datasets. Evaluated with several pretrained vision transformer models, the experimental results show that the SynBench score matches well with the actual linear-probing performance of the pretrained model on downstream tasks. Moreover, our framework can inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.
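To make the scoring procedure above concrete, here is a minimal, hypothetical sketch of the SynBench idea: sample data from a two-class conditional Gaussian mixture $\mathcal{N}(\pm\mu, I)$, measure the robust accuracy of a linear probe on the pretrained representations across perturbation radii, and divide the area under that curve by the area under the closed-form reference curve for the raw mixture. The encoder \texttt{f}, the mean-difference probe, the $\epsilon$ grid, and the closed-form reference $\Phi(\|\mu\|-\epsilon)$ used here are illustrative simplifications, not the paper's exact formulation.

```python
# Hypothetical SynBench-style scoring sketch (not the paper's exact algorithm).
import numpy as np
from scipy.stats import norm

def sample_gaussian_mixture(mu, n, rng):
    """Draw n points per class from N(+mu, I) and N(-mu, I)."""
    d = mu.shape[0]
    x_pos = rng.standard_normal((n, d)) + mu
    x_neg = rng.standard_normal((n, d)) - mu
    X = np.vstack([x_pos, x_neg])
    y = np.concatenate([np.ones(n), -np.ones(n)])
    return X, y

def reference_robust_accuracy(mu_norm, eps_grid):
    """Idealized robust accuracy of the Bayes-optimal linear classifier on the
    raw mixture under an l2 perturbation of radius eps (assumed closed form)."""
    return norm.cdf(mu_norm - eps_grid)

def empirical_robust_accuracy(Z, y, eps_grid):
    """Robust accuracy of a mean-difference linear probe on representations Z,
    certified via the signed margin divided by the probe weight norm."""
    m_pos, m_neg = Z[y > 0].mean(axis=0), Z[y < 0].mean(axis=0)
    w = m_pos - m_neg
    b = -0.5 * (m_pos + m_neg) @ w
    margin = y * (Z @ w + b) / np.linalg.norm(w)
    return np.array([(margin > eps).mean() for eps in eps_grid])

def synbench_score(f, mu, eps_grid, n=2000, seed=0):
    """Ratio of areas under the robust-accuracy curves:
    pretrained representations vs. the raw-data reference."""
    rng = np.random.default_rng(seed)
    X, y = sample_gaussian_mixture(mu, n, rng)
    Z = f(X)  # pretrained encoder applied to synthetic inputs
    auc_rep = np.trapz(empirical_robust_accuracy(Z, y, eps_grid), eps_grid)
    auc_ref = np.trapz(reference_robust_accuracy(np.linalg.norm(mu), eps_grid), eps_grid)
    return auc_rep / auc_ref
```

As a sanity check under these assumptions, an identity encoder (\texttt{f = lambda X: X}) should yield a score close to 1, while an encoder whose representations preserve less classification margin yields a lower score.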