Artificial Intelligence (AI), together with recent progress in biomedical language understanding, is gradually reshaping medical practice. With the development of biomedical language understanding benchmarks, AI applications have become widespread in the medical field. However, most benchmarks are limited to English, which makes it challenging to replicate many of their successes in other languages. To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks, including named entity recognition, information extraction, clinical diagnosis normalization, and single-sentence/sentence-pair classification, together with an associated online platform for model evaluation, comparison, and analysis. To establish baselines on these tasks, we report empirical results for 11 current Chinese pre-trained language models; the experiments show that state-of-the-art neural models still perform far below the human ceiling. Our benchmark is released at \url{https://tianchi.aliyun.com/dataset/dataDetail?dataId=95414&lang=en-us}.