Density map estimation enables object counting in dense and occluded scenes where discrete counting-by-detection methods fail. We propose a multicategory counting framework that leverages a Twins pyramid vision-transformer backbone and a specialised multi-class counting head built on a state-of-the-art multiscale decoding approach. A two-task design adds a segmentation-based Category Focus Module that suppresses inter-category cross-talk at training time. Training and evaluation on the VisDrone and iSAID benchmarks demonstrate superior performance over prior multicategory crowd-counting approaches (33%, 43%, and 64% reductions in MAE), and a comparison with YOLOv11 underscores the necessity of crowd-counting methods in dense scenes. The method's regional loss opens multi-class crowd counting to new domains, as demonstrated by its application to a biodiversity-monitoring dataset, highlighting its capacity to inform conservation efforts and enable scalable ecological insight.
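To make the core idea concrete, below is a minimal sketch of multi-class counting via per-class density maps: a backbone extracts features, a head predicts one non-negative density map per category, and each category's count is the spatial sum (integral) of its map. The Twins transformer backbone and multiscale decoder from the abstract are replaced here by a toy CNN so the snippet is self-contained; all names (`ToyBackbone`, `MultiClassDensityHead`) are hypothetical, not the paper's implementation.

```python
# Sketch only: per-class density-map counting with placeholder modules.
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Stand-in for the Twins pyramid vision transformer (assumption)."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)  # (B, out_ch, H/4, W/4)

class MultiClassDensityHead(nn.Module):
    """Predicts one non-negative density map per object category."""
    def __init__(self, in_ch=64, num_classes=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, feats):
        # ReLU keeps densities non-negative so their sums are valid counts.
        return torch.relu(self.proj(feats))  # (B, num_classes, h, w)

backbone, head = ToyBackbone(), MultiClassDensityHead(num_classes=4)
img = torch.randn(1, 3, 256, 256)
density = head(backbone(img))
# Per-class count = integral (spatial sum) of that class's density map.
counts = density.sum(dim=(2, 3))
print(counts.shape)  # torch.Size([1, 4]) -> one count per category
```

In the full method, a segmentation branch (the Category Focus Module) would additionally supervise which regions belong to which category during training, discouraging one class's density from leaking into another's map; that training-time machinery is omitted from this sketch.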