Agentic AI systems capable of generating full-stack web applications from natural language prompts ("prompt-to-app") represent a significant shift in software development. However, evaluating these systems remains challenging, as visual polish, functional correctness, and user trust are often misaligned. As a result, it is unclear how existing prompt-to-app tools compare under realistic, human-centered evaluation criteria. In this paper, we introduce a human-centered benchmark for evaluating prompt-to-app systems and conduct a large-scale comparative study of three widely used platforms: Replit, Bolt, and Firebase Studio. Using a diverse set of 96 prompts spanning common web application tasks, we generate 288 unique application artifacts. We evaluate these systems through a large-scale human-rater study involving 205 participants and 1,071 quality-filtered pairwise comparisons, assessing task-based ease of use, visual appeal, perceived completeness, and user trust. Our results show that these systems are not interchangeable: Firebase Studio consistently outperforms competing platforms across all human-evaluated dimensions, achieving the highest win rates for ease of use, trust, visual appeal, and visual appropriateness. Bolt performs competitively on visual appeal but trails Firebase Studio on usability and trust, while Replit underperforms relative to both across most metrics. These findings highlight a persistent gap between visual polish and functional reliability in prompt-to-app systems and demonstrate the necessity of interactive, task-based evaluation. We release our benchmark framework, prompt set, and generated artifacts to support reproducible evaluation and future research in agentic application generation.