Transformer-based models, such as the Vision Transformer (ViT), can outperform Convolutional Neural Networks (CNNs) in some vision tasks when sufficient training data is available. However, CNNs have a strong and useful inductive bias for vision tasks (i.e., translation equivariance and locality). In this work, we develop a novel model architecture that we call the Mobile Fish Landmark Detection network (MFLD-net). The model combines convolution operations with ViT-inspired components (i.e., patch embeddings and multi-layer perceptrons). MFLD-net achieves competitive or better results in low-data regimes while remaining lightweight, and it is therefore suitable for embedded and mobile devices. Furthermore, we show that MFLD-net can achieve keypoint (landmark) estimation accuracies on par with, or even better than, some state-of-the-art CNNs on a fish image dataset. Additionally, unlike ViT, MFLD-net does not need a pre-trained model and generalises well when trained on a small dataset. We provide quantitative and qualitative results that demonstrate the model's generalisation capabilities. This work provides a foundation for future efforts in developing mobile yet efficient fish monitoring systems and devices.
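As a rough illustration of the ViT-style patch embedding mentioned above (a hedged sketch, not the authors' implementation): an image is split into fixed-size non-overlapping patches, and each patch is flattened and linearly projected to an embedding vector, which is equivalent to applying a convolution with kernel size and stride equal to the patch size. All shapes and the projection weight below are illustrative assumptions.

```python
import numpy as np

def patch_embed(image, patch_size, weight):
    """Split an image into non-overlapping patches and linearly project each.

    image:  (H, W, C) array
    weight: (patch_size * patch_size * C, embed_dim) projection matrix
    returns (num_patches, embed_dim) token matrix
    """
    H, W, C = image.shape
    p = patch_size
    # Reshape into a grid of (H/p) x (W/p) patches, each p x p x C.
    patches = image.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)
    # Linear projection of each flattened patch to the embedding dimension.
    return patches @ weight

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))        # toy 32x32 RGB input
W_proj = rng.standard_normal((4 * 4 * 3, 64)) # project 4x4x3 patches to 64 dims
tokens = patch_embed(img, 4, W_proj)
print(tokens.shape)  # (64, 64): 8x8 = 64 patches, each a 64-dim token
```

In a trained network the projection weight is learned; here it is random purely to show the tensor bookkeeping.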