In this preprint, we present a collaborative human-AI approach to building an inspectable semantic layer for Agentic AI. AI agents first propose candidate knowledge structures from diverse data sources; domain experts then validate, correct, and extend these structures, and their feedback is used to improve subsequent models. We show how this process captures tacit institutional knowledge, improves response quality and efficiency, and mitigates institutional amnesia. We argue for a shift from post-hoc explanation to justifiable Agentic AI, in which decisions are grounded in explicit, inspectable evidence and reasoning accessible to both experts and non-specialists.