Koch, Strassle, and Tan [SODA 2023] show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time $n^{\tilde O(\log\log s)}$ for the classes of $n$-variable size-$s$ DNF, size-$s$ Decision Tree, and $\log s$-Junta by DNF (i.e., learning with a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound $n^{\Omega(\log s)}$. This matches the best known upper bounds for $n$-variable size-$s$ Decision Tree and $\log s$-Junta. In this paper, we give the same lower bounds for PAC-learning of $n$-variable size-$s$ Monotone DNF, size-$s$ Monotone Decision Tree, and Monotone $\log s$-Junta by~DNF. This solves an open problem posed by Koch, Strassle, and Tan and subsumes the above results. The lower bounds hold even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all points of the support of the distribution in polynomial time.