We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting, where a constant fraction of the samples are adversarially corrupted. Our main result is the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees. Our algorithm can be viewed as a special case of an algorithm for robustly learning a distribution from a general exponential family. To prove its correctness for Ising models, we establish new anti-concentration results for degree-$2$ polynomials of Ising models that may be of independent interest.
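For context, a brief recall of the setup as it is standard in this literature (the symbols $J$, $h$, and the slack constant $\eta$ below are our notation for illustration, not necessarily the paper's): an Ising model over $\{\pm 1\}^n$ with interaction matrix $J$ (zero diagonal) and external field $h$ has density
\[
  p_{J,h}(x) \;\propto\; \exp\!\Big(\tfrac{1}{2}\, x^\top J x + h^\top x\Big), \qquad x \in \{\pm 1\}^n,
\]
and Dobrushin's condition is the high-temperature requirement
\[
  \|J\|_\infty \;=\; \max_{i \in [n]} \sum_{j \ne i} |J_{ij}| \;\le\; 1 - \eta
  \quad \text{for some constant } \eta > 0.
\]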