As generative AI diffuses through academia, divergence between policy and practice becomes consequential, creating demand for auditable indicators of alignment. This study prototypes a ten-item, indirect-elicitation instrument embedded in a structured interpretive framework to surface gaps between institutional rules and practitioners' AI use. The framework extracts empirical and epistemic signals from academics, yielding three filtered indicators of such gaps: (1) AI-integrated assessment capacity (proxy): within a three-signal screen (AI skill, perceived teaching benefit, detection confidence), the share of respondents who would fully allow AI in exams; (2) sector-level necessity (proxy): among high-output-control users who still credit AI with a high contribution, the proportion who judge AI capable of challenging established disciplines; and (3) ontological stance: among respondents who judge AI different in kind from prior tools, report practice change, and pass a metacognition gate, the split between material and immaterial views, serving as an ontological map that aligns procurement claims with evidence classes.
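The three filtered indicators can be sketched as simple screen-then-tally computations over respondent records. The sketch below is illustrative only: the field names, boolean codings, and gate definitions are assumptions for exposition, not the study's actual codebook or thresholds.

```python
# Hypothetical sketch of the three filtered indicators. All field names
# (e.g. "ai_skill", "stance") are illustrative assumptions.

def indicator_assessment_capacity(rows):
    """Proxy 1: within the three-signal screen (AI skill, perceived
    teaching benefit, detection confidence), the share of respondents
    who would fully allow AI in exams."""
    screened = [r for r in rows
                if r["ai_skill"] and r["teaching_benefit"]
                and r["detection_confidence"]]
    if not screened:
        return 0.0
    return sum(r["allow_ai_in_exams"] for r in screened) / len(screened)

def indicator_sector_necessity(rows):
    """Proxy 2: among high-output-control users who still credit AI with
    a high contribution, the proportion who judge AI capable of
    challenging established disciplines."""
    base = [r for r in rows
            if r["high_output_control"] and r["high_contribution"]]
    if not base:
        return 0.0
    return sum(r["can_challenge_disciplines"] for r in base) / len(base)

def indicator_ontological_stance(rows):
    """Indicator 3: among respondents passing the difference-in-kind,
    practice-change, and metacognition gates, the count of material
    vs. immaterial views."""
    gated = [r for r in rows
             if r["different_in_kind"] and r["practice_changed"]
             and r["metacognition_ok"]]
    material = sum(1 for r in gated if r["stance"] == "material")
    return material, len(gated) - material
```

Each indicator is a proportion (or split) over a gated subsample rather than the full sample, which is what makes them "filtered" indicators: the screen removes respondents whose signals would confound the reading.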