With the rising number of machine learning competitions, the world has witnessed an exciting race for the best algorithms. However, the data selection process involved may fundamentally suffer from evidence ambiguity and concept drift, thereby possibly leading to deleterious effects on the performance of various models. This paper proposes a new Reinforced Data Sampling (RDS) method that learns how to sample data adequately in the search for useful models and insights. We formulate the optimisation problem of model diversification $\delta\text{-div}$ in data sampling to maximise learning potential and optimum allocation by injecting model diversity. This work advocates the employment of diverse base learners as value functions, such as neural networks, decision trees, or logistic regressions, to reinforce the selection of data subsets with multi-modal belief. We introduce different ensemble reward mechanisms, including soft voting and stochastic choice, to approximate the optimal sampling policy. The evaluation conducted on four datasets highlights the benefits of the RDS method over traditional sampling approaches. Our experimental results suggest that trainable sampling for model diversification is useful for competition organisers, researchers, and even newcomers in pursuing the full potential of various machine learning tasks such as classification and regression. The source code is available at https://github.com/probeu/RDS.
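The soft-voting ensemble reward mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the reward for a candidate data sample is the held-out score of diverse base learners' averaged class probabilities, and all names (e.g. `ensemble_reward`) are hypothetical.

```python
# Hypothetical sketch: soft-voting ensemble reward over diverse base learners.
# Assumption: the reward of a sampled subset is the validation AUC of the
# averaged predicted probabilities from heterogeneous models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def ensemble_reward(X_train, y_train, X_val, y_val):
    """Fit diverse base learners on the sampled subset, soft-vote their
    class probabilities, and score the blend on the held-out split."""
    learners = [
        LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(max_depth=5),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    ]
    probs = []
    for clf in learners:
        clf.fit(X_train, y_train)
        probs.append(clf.predict_proba(X_val)[:, 1])
    soft_vote = np.mean(probs, axis=0)      # soft voting: mean probability
    return roc_auc_score(y_val, soft_vote)  # reward signal for the sampler

# Evaluate one candidate train/validation split proposed by a sampler.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
mask = np.random.default_rng(0).random(len(y)) < 0.8   # candidate sample
reward = ensemble_reward(X[mask], y[mask], X[~mask], y[~mask])
print(round(reward, 3))
```

In the paper's setting this scalar would feed back into the sampling policy; the stochastic-choice variant would instead draw one learner's score at random rather than averaging.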