Vision impairment and blindness are a major global health challenge, and gaps in the ophthalmology workforce limit access to specialist care. We evaluate AMIE, a medically fine-tuned conversational system based on Gemini with integrated web search and self-critique reasoning, using real-world clinical vignettes that reflect scenarios a general ophthalmologist would be expected to manage. We conducted two complementary evaluations: (1) a human-AI interactive diagnostic reasoning study in which ophthalmologists recorded initial differentials and plans, then reviewed AMIE's structured output and revised their answers; and (2) a masked preference and quality study comparing AMIE's narrative outputs with case-author reference answers using a predefined rubric. AMIE's standalone diagnostic performance was comparable to that of clinicians at baseline. Crucially, after reviewing AMIE's responses, ophthalmologists tended to rank the correct diagnosis higher, reached greater agreement with one another, and enriched their investigation and management plans. Improvements were observed even when AMIE's top choice differed from or underperformed the clinician baseline, consistent with a complementary effect in which structured reasoning support helps clinicians re-rank their differentials rather than simply accept the model output. Preferences varied by clinical grade, suggesting opportunities to personalise responses by experience level. Without ophthalmology-specific fine-tuning, AMIE matched the clinician baseline and augmented clinical reasoning at the point of need, motivating multi-axis evaluation, domain adaptation, and prospective multimodal studies in real-world settings.