Overparameterized models with millions of parameters have been hugely successful. In this work, we ask: can the need for large models be, at least in part, due to the \emph{computational} limitations of the learner? Additionally, we ask: is this situation exacerbated for \emph{robust} learning? We show that this indeed could be the case. We exhibit learning tasks for which computationally bounded learners need \emph{significantly more} model parameters than information-theoretic learners need. Furthermore, we show that even more model parameters could be necessary for robust learning. In particular, we extend the recent result of Bubeck and Sellke [NeurIPS'2021], which shows that robust models might need more parameters, to the computational regime, and show that computationally bounded learners could provably need an even larger number of parameters. Then, we address the following related question: can we hope to remedy the situation for robust computationally bounded learning by restricting \emph{adversaries} to also be computationally bounded, for the sake of obtaining models with fewer parameters? Here again, we show that this could be possible. Specifically, building on the work of Garg, Jha, Mahloujifar, and Mahmoody [ALT'2020], we demonstrate a learning task that can be learned efficiently and robustly against a computationally bounded attacker, whereas achieving robustness against an information-theoretic attacker requires the learner to utilize significantly more parameters.