This paper presents the design and implementation of Cognac, a domain-specific compilation tool based on LLVM that accelerates cognitive models. Cognitive models explain the processes underlying cognitive function and offer a path toward human-like artificial intelligence. However, cognitive modeling is laborious, requiring the composition of many kinds of computational tasks, and it suffers from poor performance because it relies on high-level languages such as Python. To retain the flexibility of Python while achieving high performance, Cognac uses domain-specific knowledge to compile Python-based cognitive models into LLVM IR, carefully stripping away features such as dynamic typing and memory management that add overhead beyond the model itself. As we show, this enables significantly faster model execution. We also show that the generated code allows classical compiler data-flow analysis passes to reveal properties of data flow in cognitive models that are useful to cognitive scientists. Cognac is publicly available, is being used by researchers in cognitive science, and has led to patches that are currently under evaluation for integration into mainline LLVM.
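To make the overall idea concrete, the sketch below illustrates the kind of lowering involved: using the llvmlite bindings, a trivial model operation is emitted as statically typed LLVM IR and JIT-compiled, so its execution bypasses Python's dynamic typing and automatic memory management. This is not Cognac's actual pipeline or API; the function name `scale_activation` and the operation it performs are illustrative assumptions only.

```python
# Minimal sketch (assumption: not Cognac's real pipeline) of lowering a
# Python-level model operation to statically typed LLVM IR via llvmlite.
import ctypes
from llvmlite import ir, binding

# Emit IR for a hypothetical operation: scale_activation(x) = x * 2.0
module = ir.Module(name="cognac_sketch")
fnty = ir.FunctionType(ir.DoubleType(), [ir.DoubleType()])
fn = ir.Function(module, fnty, name="scale_activation")
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
builder.ret(builder.fmul(fn.args[0], ir.Constant(ir.DoubleType(), 2.0)))

# JIT-compile the module with LLVM's MCJIT and call the native code.
binding.initialize()
binding.initialize_native_target()
binding.initialize_native_asmprinter()
target_machine = binding.Target.from_default_triple().create_target_machine()
engine = binding.create_mcjit_compiler(binding.parse_assembly(str(module)),
                                       target_machine)
engine.finalize_object()

addr = engine.get_function_address("scale_activation")
scale_activation = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)(addr)
print(scale_activation(3.0))  # 6.0, executed as native code, no interpreter
```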