Autonomous coding agents built on large language models (LLMs) can now solve many general software and machine learning tasks, but they remain ineffective on complex, domain-specific scientific problems. Medical imaging is a particularly demanding domain: it requires long training cycles, high-dimensional data handling, and specialized preprocessing and validation pipelines, capabilities that existing agent benchmarks do not fully measure. To address this gap, we introduce ReX-MLE, a benchmark of 20 challenges derived from high-impact medical imaging competitions spanning diverse modalities and task types. Unlike prior ML-agent benchmarks, ReX-MLE evaluates the full end-to-end workflow, requiring agents to independently manage data preprocessing, model training, and submission under realistic compute and time constraints. Evaluating state-of-the-art agents (AIDE, ML-Master, R&D-Agent) with different LLM backends (GPT-5, Gemini, Claude), we observe a severe performance gap: most submissions rank at the 0th percentile relative to human experts. Failures stem from limitations in both domain knowledge and engineering practice. ReX-MLE exposes these bottlenecks and provides a foundation for developing domain-aware autonomous AI systems.