With infinitely many high-quality data points, infinite computational power, an infinitely large foundation model, a perfect training algorithm, and guaranteed zero generalization error on the pretext task, can the model be used for everything? This question cannot be answered by the existing theory of representation, optimization, or generalization, because the issues these theories mainly investigate are assumed away here. In this paper, we show that category theory provides powerful machinery to answer this question. We prove three results. The first limits the power of prompt-based learning: the model can solve a downstream task with prompts if and only if the task is representable. The second shows that fine-tuning does not have this limit: a foundation model with the minimum power (up to symmetry) can theoretically solve downstream tasks with fine-tuning and enough resources. Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category (e.g., images) using the structural information from the source category (e.g., texts). Along the way, we provide a categorical framework for supervised and self-supervised learning, which may be of independent interest.
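For readers unfamiliar with the categorical term invoked in the first result, "representable" is the standard notion of a representable functor; the following is a minimal sketch of that textbook definition (the identification of downstream tasks with such functors is our gloss on the abstract, not a quotation from the paper body): a functor $F \colon \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$ is representable if there exist an object $c \in \mathcal{C}$ and a natural isomorphism
\[
F \;\cong\; \mathrm{Hom}_{\mathcal{C}}(-,\, c),
\]
so the first result says prompt-based learning can solve exactly those downstream tasks whose associated functor admits such a representing object $c$.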