Smartphone-based iris recognition in the visible spectrum (VIS) remains difficult due to illumination variability, pigmentation differences, and the absence of standardized capture controls. This work presents a compact end-to-end pipeline that enforces ISO/IEC 29794-6 quality compliance at acquisition time and demonstrates that accurate VIS iris recognition is feasible on commodity devices. Using a custom Android application that performs real-time framing, sharpness assessment, and user feedback, we introduce the CUVIRIS dataset of 752 compliant images from 47 subjects. A lightweight MobileNetV3-based multi-task segmentation network (LightIrisNet) is developed for efficient on-device processing, and a transformer-based matcher (IrisFormer) is adapted to the VIS domain. Under a standardized protocol, and in comparative benchmarking against prior CNN baselines, OSIRIS attains a TAR of 97.9% at FAR = 0.01 (EER = 0.76%), while IrisFormer, trained only on UBIRIS.v2, achieves an EER of 0.057% on CUVIRIS. The acquisition app, the trained models, and a public subset of the dataset are released to support reproducibility. These results confirm that standardized capture combined with VIS-adapted lightweight models enables accurate and practical iris recognition on smartphones.
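For readers less familiar with the reported operating points, the following minimal sketch (not the authors' evaluation code) illustrates how TAR@FAR = 0.01 and EER are typically computed from genuine and impostor similarity scores; the synthetic score distributions and function names are assumptions for illustration only.

```python
# Illustrative sketch, assuming similarity scores where higher means more similar.
import numpy as np

def tar_at_far(genuine, impostor, far_target=0.01):
    """TAR at the threshold where the impostor acceptance rate equals far_target."""
    # Threshold set at the (1 - far_target) quantile of impostor scores.
    thr = np.quantile(impostor, 1.0 - far_target)
    return float(np.mean(genuine >= thr)), float(thr)

def eer(genuine, impostor):
    """Equal error rate: operating point where FAR and FRR are (approximately) equal."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.05, 1_000)     # synthetic genuine-pair scores
    impostor = rng.normal(0.5, 0.05, 100_000)  # synthetic impostor-pair scores
    tar, thr = tar_at_far(genuine, impostor, 0.01)
    print(f"TAR@FAR=0.01: {tar:.3f} (threshold {thr:.3f}), EER: {eer(genuine, impostor):.4f}")
```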