Active learning can reduce the number of samples needed to perform a hypothesis test and to estimate the parameters of a model. In this paper, we revisit the work of Chernoff, which described an asymptotically optimal algorithm for performing a hypothesis test. We obtain a novel sample complexity bound for Chernoff's algorithm, with a non-asymptotic term that characterizes its performance at a fixed confidence level. We also develop an extension of Chernoff sampling that can be used to estimate the parameters of a wide variety of models, and we obtain a non-asymptotic bound on the estimation error. We apply our extension of Chernoff sampling to actively learn neural network models and to estimate parameters in real-data linear and non-linear regression problems, where our approach compares favorably with state-of-the-art methods.