This study presents a design science blueprint for an orchestrated AI assistant and co-pilot that acts as a socio-technical mediator in doctoral supervision. Design requirements are derived from Stakeholder Theory and bounded by Academic Integrity. We consolidate recent evidence on supervision gaps and student wellbeing, then map the identified issues to adjacent large language model capabilities through a transparent severity-mitigability triage. The artefact assembles existing capabilities into a single accountable agentic AI workflow, combining retrieval-augmented generation, temporal knowledge graphs, and mixture-of-experts routing into a solution stack that addresses documented doctoral supervision pain points. We further propose a student context store that introduces behaviour patches, turning tacit guidance into auditable practice, and student-set thresholds that trigger progress summaries, while authorship and final judgement remain with people. We specify a student-initiated moderation loop in which assistant outputs are routed to a supervisor for review and patching, and we analyse a reconfigured stakeholder ecosystem that makes information explicit and accountable. Such a system carries risks, including over-reliance on AI and the illusion of learning, for which we propose guardrails. The contribution is an ex ante, literature-grounded design with workflow and governance rules that institutions can implement and trial across disciplines.
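To illustrate the mechanics summarised above, the sketch below shows, in Python, how student-set thresholds, behaviour patches, and the student-initiated moderation loop might fit together. All identifiers (StudentContextStore, BehaviourPatch, moderation_loop, progress_threshold) are hypothetical and not taken from the study; this is a minimal sketch under the assumptions that thresholds count assistant interactions and that supervisor review returns a verdict plus an optional patch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

# Hypothetical names throughout: the study specifies the workflow, not an API.

@dataclass
class BehaviourPatch:
    """A supervisor-authored correction that steers future assistant output,
    recorded so tacit guidance becomes auditable practice."""
    issued_by: str
    instruction: str
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class StudentContextStore:
    """Per-student state: a student-set threshold and the patch audit trail."""
    student_id: str
    progress_threshold: int  # interactions between automatic progress summaries
    patches: list[BehaviourPatch] = field(default_factory=list)
    interactions: int = 0

    def record_interaction(self) -> bool:
        """Count one assistant interaction; True when a summary should fire."""
        self.interactions += 1
        return self.interactions % self.progress_threshold == 0

def moderation_loop(
    store: StudentContextStore,
    assistant_output: str,
    supervisor_review: Callable[[str], tuple[bool, Optional[str]]],
) -> bool:
    """Student-initiated loop: route an assistant output to the supervisor.
    Any returned patch is appended to the audit trail; the approve/reject
    verdict stays with the human reviewer, not the assistant."""
    approved, patch_text = supervisor_review(assistant_output)
    if patch_text is not None:
        store.patches.append(BehaviourPatch("supervisor", patch_text))
    return approved

# Example: a threshold of 5 triggers a progress summary every fifth interaction.
store = StudentContextStore(student_id="s-001", progress_threshold=5)
for _ in range(5):
    summary_due = store.record_interaction()
print(summary_due)  # True on the fifth interaction
```

The sketch mirrors the design's governance constraint: moderation_loop records the patch for auditability but returns the supervisor's verdict unchanged, keeping final judgement with people.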