Enterprises need access decisions that satisfy least privilege, comply with regulations, and remain auditable. We present a policy-aware access controller that uses a large language model (LLM) to interpret natural-language requests against written policies and metadata, not raw data. The system, implemented with Google Gemini~2.0 Flash, executes a six-stage reasoning framework (context interpretation, user validation, data classification, business-purpose test, compliance mapping, and risk synthesis) with early hard policy gates and deny-by-default semantics. It returns APPROVE, DENY, or CONDITIONAL, together with cited controls and a machine-readable rationale. We evaluate on fourteen canonical cases across seven scenario families using a privacy-preserving benchmark. Results show Exact Decision Match improving from 10/14 to 13/14 (92.9\%) after applying policy gates, DENY recall rising to 1.00, the False Approval Rate on must-deny families dropping to 0, and Functional Appropriateness and Compliance Adherence at 14/14. Expert ratings of rationale quality are high, and median latency is under one minute. These findings indicate that policy-constrained LLM reasoning, combined with explicit gates and audit trails, can translate human-readable policies into safe, compliant, and traceable machine decisions.
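The gated decision flow summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`Decision`, `decide`, the gate and stage callables) are hypothetical, and the actual system delegates stage reasoning to the LLM rather than to local functions. The sketch shows only the control structure: hard policy gates run first and any failure is an immediate DENY (deny by default), while soft concerns raised by the six reasoning stages downgrade the outcome to CONDITIONAL.

```python
from dataclasses import dataclass, field

APPROVE, DENY, CONDITIONAL = "APPROVE", "DENY", "CONDITIONAL"

@dataclass
class Decision:
    outcome: str
    cited_controls: list = field(default_factory=list)
    rationale: list = field(default_factory=list)  # machine-readable audit trail

def decide(request, gates, stages):
    """Hypothetical controller loop.

    gates:  callables returning (passed: bool, control_id: str)
    stages: callables returning (ok: bool, note: str), one per reasoning stage
    """
    rationale = []
    # Early hard policy gates: any failure short-circuits to DENY (deny by default).
    for gate in gates:
        passed, control = gate(request)
        if not passed:
            return Decision(DENY, [control], rationale + [f"gate failed: {control}"])
        rationale.append(f"gate passed: {control}")
    # Six reasoning stages (context, user, classification, purpose, compliance, risk):
    # soft concerns do not deny outright but downgrade the outcome to CONDITIONAL.
    conditional = False
    for stage in stages:
        ok, note = stage(request)
        rationale.append(note)
        if not ok:
            conditional = True
    return Decision(CONDITIONAL if conditional else APPROVE, [], rationale)
```

Keeping the gates outside the LLM-driven stages is what makes the deny-by-default guarantee enforceable: a request that violates a hard policy never reaches probabilistic reasoning at all.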