Special Session: Transferable neural models for language understanding
Language understanding, which covers machine reading comprehension in various forms such as question answering, machine translation, and dialog, has long been an aspiration of the artificial intelligence community, but it saw only limited success until recently. The success of deep neural networks has sparked a resurgence of interest in applying them to language understanding. The most recent research aims to build deep neural network models that can serve a variety of language understanding tasks, such as paraphrasing, question answering, machine translation, spoken dialog, and text categorization. However, these models are (1) data hungry, requiring large amounts of training data, and (2) task specific, so a model built for one task is hard to generalize to other related tasks. To address these problems, transfer learning has recently been applied to language understanding. Transfer learning is a learning paradigm that applies knowledge gained while solving one problem to a different but related problem: a neural model is first trained on a language understanding task with abundant data, and then retrained for another task that has only a small amount of training data.
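To make that recipe concrete, here is a minimal sketch of the pretrain-then-retrain workflow, using PyTorch and synthetic data. The encoder architecture, the two task heads, and all names and shapes are illustrative assumptions for this sketch, not part of the call itself.

```python
# A minimal sketch of transfer learning for language understanding:
# train a shared encoder on a data-rich source task, then reuse it
# (with a fresh head) on a data-poor target task. All components here
# are hypothetical placeholders chosen for illustration.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Task-agnostic encoder whose weights are transferred between tasks."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h.squeeze(0)  # one vector per sentence

def train(encoder, head, x, y, epochs=3):
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(encoder(x)), y)
        loss.backward()
        opt.step()

encoder = SharedEncoder()

# Step 1: pretrain encoder + source head on a large source task,
# e.g. 5-way text categorization (synthetic stand-in data here).
source_head = nn.Linear(64, 5)
x_src = torch.randint(0, 1000, (512, 20))  # "large" training set
y_src = torch.randint(0, 5, (512,))
train(encoder, source_head, x_src, y_src)

# Step 2: keep the pretrained encoder, attach a fresh head, and
# retrain on the small target task, e.g. paraphrase detection.
target_head = nn.Linear(64, 2)
x_tgt = torch.randint(0, 1000, (32, 20))   # "small" training set
y_tgt = torch.randint(0, 2, (32,))
train(encoder, target_head, x_tgt, y_tgt)
```

In practice the second stage often freezes some encoder layers or uses a lower learning rate, but the core idea is the same: the target task starts from representations learned on the source task rather than from scratch.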
The IJCNN 2019 paper submission deadline is December 15, 2018, and notification of acceptance is January 30, 2019.
Topics of interest include, but are not limited to: natural language understanding, reasoning and generation, deep learning, transfer learning, active learning, self-learning, domain adaptation, sequence-to-sequence learning, machine translation, paraphrasing, question answering, and information extraction.
Submit online at:
https://ieee-cis.org/conferences/ijcnn2019/upload.php
Please select "S33 — Transferable neural models for language understanding" as the Main research topic.
For further information, please contact Dr. Zhiwei Lin (z.lin@ulster.ac.uk).