Pretraining methods are typically compared by evaluating linear-classifier accuracy, transfer learning performance, or by visually inspecting lower-dimensional projections of the representation manifold (RM). We show that the differences between methods can be understood more clearly by investigating the RM directly, which allows for a more detailed comparison. To this end, we propose a framework and a new metric to measure and compare different RMs, and we investigate and report on the RM characteristics of various pretraining methods. These characteristics are measured by applying successively larger local alterations to the input data, using white noise injections and Projected Gradient Descent (PGD) adversarial attacks, and then tracking each datapoint's representation. We calculate the total distance moved by each datapoint and the relative change in distance between successive alterations. We show that self-supervised methods learn an RM where alterations lead to large but constant-size changes, indicating a smoother RM than that of fully supervised methods. We then combine these measurements into a single metric, the Representation Manifold Quality Metric (RMQM), where larger values indicate larger and less variable step sizes, and show that RMQM correlates positively with performance on downstream tasks.
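The per-datapoint quantities described above (total distance moved, and the relative change in step size between successive alterations) can be sketched as follows. This is a minimal illustration assuming representations are stored as one array per perturbation level; the function name `manifold_steps` and the exact normalization are hypothetical, not the paper's definition of RMQM.

```python
import numpy as np

def manifold_steps(representations):
    """Track how far each datapoint's representation moves as the
    input is perturbed more and more strongly.

    representations: list of (n_points, dim) arrays, one per
    perturbation level, with level 0 being the clean input.
    """
    reps = [np.asarray(r, dtype=float) for r in representations]
    # Step size: Euclidean distance each datapoint moves between
    # successive perturbation levels.
    steps = np.stack([
        np.linalg.norm(reps[i + 1] - reps[i], axis=1)
        for i in range(len(reps) - 1)
    ])  # shape: (n_levels - 1, n_points)
    # Total distance moved along the representation path, per datapoint.
    total_distance = steps.sum(axis=0)
    # Relative change in step size between successive alterations;
    # values near 1 mean constant-size steps (a smoother manifold).
    rel_change = steps[1:] / (steps[:-1] + 1e-12)
    return total_distance, rel_change
```

Under this sketch, a method whose `rel_change` stays near 1 across perturbation levels would exhibit the large-but-constant step behavior the abstract attributes to self-supervised RMs.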