The CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC) is the annual conference of the CCF Technical Committee on Chinese Information Technology (CCF-TCCI). NLPCC is a leading international conference in the fields of natural language processing (NLP) and Chinese computing (CC). It serves as a premier forum for researchers and practitioners from academia, industry, and government to share ideas, research results, and experiences, and to promote research and technical innovation in these fields. Official website: http://tcci.ccf.org.cn/conference/2019/

Networks today are becoming ever larger, more complex, and more widely applied. Network data is well known to be complex and challenging. To process graph data effectively, the first critical challenge is network data representation: how to represent networks properly so that advanced analytic tasks, such as pattern discovery, analysis, and prediction, can be carried out efficiently in both time and space. In this talk, I will introduce recent trends and the latest advances in network embedding and graph convolutional networks (GCNs), including disentangled GCNs, attack-robust GCNs, and automated machine learning for network embedding.

http://tcci.ccf.org.cn/conference/2020/dldoc/tutorial_3.pdf
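For context, the GCN variants mentioned above all build on the same basic graph-convolution layer, H' = σ(Â H W), where Â is the symmetrically normalized adjacency matrix with self-loops. Below is a minimal, self-contained sketch of that layer in PyTorch; it is a generic illustration under standard assumptions, not code from the tutorial, and all names are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        # a_hat: (N, N) normalized adjacency D^{-1/2} (A + I) D^{-1/2}
        # h:     (N, in_dim) node feature matrix
        return torch.relu(self.linear(a_hat @ h))

# Toy usage: 4 nodes in a ring, 8-d features embedded into 16 dimensions.
n = 4
adj = torch.zeros(n, n)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
adj_self = adj + torch.eye(n)                      # add self-loops
deg_inv_sqrt = adj_self.sum(dim=1).pow(-0.5)       # D^{-1/2} as a vector
a_hat = deg_inv_sqrt[:, None] * adj_self * deg_inv_sqrt[None, :]
layer = GCNLayer(8, 16)
embeddings = layer(a_hat, torch.randn(n, 8))       # (4, 16) node embeddings
```

Roughly speaking, the variants in the talk modify this template: disentangled GCNs factor the hidden channels into interpretable components, attack-robust GCNs harden Â or the aggregation against adversarial edges, and AutoML searches over such design choices automatically.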

Latest Papers

Pretrained Language Models (PLMs) have achieved tremendous success in natural language understanding tasks. While different learning schemes -- fine-tuning, zero-shot, and few-shot learning -- have been widely explored and compared for languages such as English, comparatively little work in Chinese fairly and comprehensively evaluates and compares these methods, which hinders cumulative progress. In this paper, we introduce the Chinese Few-shot Learning Evaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluation benchmark in Chinese. It includes nine tasks, ranging from single-sentence and sentence-pair classification tasks to machine reading comprehension tasks. We systematically evaluate five state-of-the-art (SOTA) few-shot learning methods (PET, ADAPET, LM-BFF, P-tuning, and EFL) and compare their performance with fine-tuning and zero-shot learning schemes on the newly constructed FewCLUE benchmark. Experimental results reveal that: 1) the effect of different few-shot learning methods is sensitive to the pre-trained model to which the methods are applied; 2) PET and P-tuning achieve the best overall performance with RoBERTa and ERNIE, respectively. Our benchmark is used in the few-shot learning contest of NLPCC 2021. In addition, we provide a user-friendly toolkit, as well as an online leaderboard, to facilitate further progress on Chinese few-shot learning. We provide baseline performance for the different learning methods as a reference for future research.
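Several of the evaluated methods (PET, ADAPET, LM-BFF) reformulate classification as a cloze task over a masked language model. As a rough illustration of that idea, here is a minimal zero-shot PET-style inference sketch using Hugging Face transformers; the checkpoint, pattern, and verbalizer tokens are illustrative assumptions, and this is not the FewCLUE toolkit itself.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative choice of a Chinese masked LM; any BERT-style checkpoint works.
model_name = "hfl/chinese-roberta-wwm-ext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# PET wraps the input in a pattern containing [MASK] and compares the LM's
# scores for label "verbalizer" tokens at the masked position.
text = "这部电影的情节非常精彩"                  # "The plot of this movie is excellent"
prompt = f"{text}。总之很[MASK]。"               # pattern: "... In short, it is [MASK]."
verbalizers = {"好": "positive", "差": "negative"}   # token -> label (assumed)

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]     # (vocab_size,)

scores = {label: logits[tokenizer.convert_tokens_to_ids(tok)].item()
          for tok, label in verbalizers.items()}
print(max(scores, key=scores.get))                   # predicted label
```

Few-shot PET then fine-tunes the masked LM on labeled examples rendered through the same pattern, which is why the choice of pattern and verbalizer is so influential in the benchmark results.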
