(Estimated reading time: 6 minutes)
Microsoft Research Asia and Microsoft headquarters have jointly launched the "星跃计划" joint research program, and recruitment is ongoing. We welcome your interest and applications! Join the program, cross the ocean with us, and explore more possibilities in research.
The program aims to give outstanding talent the opportunity to work with research teams at Microsoft's global headquarters on real, cutting-edge problems. You will do impactful research in an international research environment, in a diverse and inclusive atmosphere, and under the guidance of top researchers!
The cross-lab joint research project currently recruiting is in the field of data intelligence: Neuro-Symbolic Semantic Parsing for Data Science. Open projects under the program will continue to be updated, so please stay tuned for the latest news!
In the program, you will:
- Conduct research under the guidance of top researchers from MSR Asia and MSR Redmond, and engage in deep exchanges with researchers from different backgrounds
- Focus on real, cutting-edge problems from industry, aiming for results with impact on both academia and industry
- Experience, through offline and online collaboration, the international and open research atmosphere and the diverse, inclusive culture of Microsoft's two research labs
Who can apply:
- Current Master's and Ph.D. students, as well as deferred or gap-year students
Neuro-Symbolic Semantic Parsing for Data Science
Our cross-lab, inter-disciplinary research team develops AI technology for interactive coding assistance for data science, data analytics, and business process automation. It allows the user to specify their data processing intent in the middle of their workflow using a combination of natural language, input-output examples, and multi-modal UX – and translates that intent into the desired source code. The underlying AI technology integrates our state-of-the-art research in program synthesis, semantic parsing, and structure-grounded natural language understanding. It has the potential to improve productivity of millions of data scientists and software developers, as well as establish new scientific milestones for deep learning over structured data, grounded language understanding, and neuro-symbolic AI.
The research project involves collecting and establishing a novel benchmark dataset for data science program generation, developing novel neuro-symbolic semantic parsing models to tackle this challenge, adapting large-scale pretrained language models to new domains and knowledge bases, as well as publishing in top-tier AI/NLP conferences. We expect the benchmark dataset and the new models to be used in academia as well as in Microsoft products.
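To make the technical setting more concrete, the following is a minimal, hypothetical sketch of the kind of natural-language-to-code semantic parsing the project builds on, written with PyTorch and HuggingFace Transformers (the tools listed in the qualifications below). The checkpoint Salesforce/codet5-base, the prompt format, and the decoding settings are illustrative assumptions only, not the project's actual models or data.

# Minimal sketch: translate a natural-language data request into candidate code
# with a pretrained encoder-decoder model. The checkpoint and prompt format are
# illustrative assumptions, not the project's actual system.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Salesforce/codet5-base"  # assumed stand-in for a code-generation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The user's intent, grounded in the structure of their data (column names).
utterance = "average salary per department, sorted descending"
schema = "table columns: department, employee, salary"
inputs = tokenizer(f"{utterance} | {schema}", return_tensors="pt")

# Beam-search a candidate program. A neuro-symbolic parser would additionally
# constrain decoding with a grammar or filter candidates by executing them on the data.
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

In the actual project, such a model would be fine-tuned on the benchmark dataset described above and combined with symbolic constraints, rather than used zero-shot as in this sketch.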
Research groups:
- Natural Language Computing, MSR Asia: https://www.microsoft.com/en-us/research/group/natural-language-computing
- Neuro-Symbolic Learning, MSR Redmond
Qualifications:
- Master's or Ph.D. students majoring in computer science or equivalent areas
- Background in deep NLP, semantic parsing, sequence-to-sequence learning, and Transformers required
- Experience with PyTorch and HuggingFace Transformers
- Fluent English speaking, listening, and writing skills
- Background in deep learning over structured data (graphs/trees/programs) and program synthesis preferred
- Papers published at top-tier AI/NLP conferences preferred
Qualified applicants, please fill out the application form below:
https://jinshuju.net/f/TpaCqG
Or scan the QR code below to start your application right away!