We present an approach in which two different models (Deep and Shallow) are trained separately on the data and a weighted average of their outputs is taken as the final result. For the Deep approach, we use different combinations of models such as Convolutional Neural Networks, pretrained word2vec embeddings, and LSTMs to obtain representations, which are then used to train a Deep Neural Network. For Clarity prediction, we also use an Attentive Pooling approach for the pooling operation so as to be aware of the Title-Category pair. For the Shallow approach, we use the gradient boosting technique LightGBM on features generated from the title and categories. We find that an ensemble of these approaches does a better job than either of them alone, suggesting that the results of the Deep and Shallow approaches are highly complementary.
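As a minimal sketch of the ensembling step described above, the snippet below blends the two models' predicted probabilities with a fixed weight. The weight value 0.6 and the function name are illustrative assumptions, not values or code from the paper; in practice the weight would be tuned on a validation set.

```python
import numpy as np

def ensemble_predictions(deep_probs, shallow_probs, deep_weight=0.6):
    """Weighted average of Deep and Shallow model outputs.

    Note: the 0.6 / 0.4 split is an assumption for illustration only;
    the actual weight would be chosen on held-out validation data.
    """
    deep_probs = np.asarray(deep_probs, dtype=float)
    shallow_probs = np.asarray(shallow_probs, dtype=float)
    return deep_weight * deep_probs + (1.0 - deep_weight) * shallow_probs

# Hypothetical clarity scores from the two models for three example titles.
deep = [0.91, 0.40, 0.75]      # e.g. CNN/LSTM network with attentive pooling
shallow = [0.85, 0.55, 0.70]   # e.g. LightGBM on title/category features
print(ensemble_predictions(deep, shallow))
```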