We present a novel corpus for French dialect identification comprising 413,522 French text samples collected from public news websites in Belgium, Canada, France, and Switzerland. To ensure an accurate estimation of the dialect identification performance of models, we designed the corpus to eliminate potential biases related to topic, writing style, and publication source. More precisely, the training, validation, and test splits are collected from different news websites, while searching for different keywords (topics). This leads to a French cross-domain (FreCDo) dialect identification task. We conduct experiments with four competitive baselines: a fine-tuned CamemBERT model, an XGBoost classifier based on fine-tuned CamemBERT features, a Support Vector Machines (SVM) classifier based on fine-tuned CamemBERT features, and an SVM based on word n-grams. Aside from presenting quantitative results, we also analyze the most discriminative features learned by CamemBERT. Our corpus is available at https://github.com/MihaelaGaman/FreCDo.
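The word n-gram SVM baseline mentioned above can be sketched as a standard text-classification pipeline. The snippet below is a minimal illustration using scikit-learn; the n-gram range, SVM hyperparameters, and the toy texts and dialect labels are assumptions for demonstration, not the authors' exact configuration or data.

```python
# Hedged sketch of a word n-gram SVM baseline for dialect identification.
# The toy samples, labels, and hyperparameters below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for FreCDo news texts (hypothetical examples).
train_texts = [
    "le gouvernement fédéral a annoncé un nouveau budget",
    "la votation populaire aura lieu dimanche prochain",
]
train_labels = ["CA", "CH"]  # toy dialect labels: Canada, Switzerland

# Word unigrams and bigrams (an assumed range) fed to a linear SVM.
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)
pred = model.predict(["la votation aura lieu dimanche"])
```

In the cross-domain setup described above, such a model would be trained and evaluated on splits drawn from different websites and topics, so surface n-gram cues tied to a single source or topic should not inflate its score.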