This work addresses the problem of unbalanced expert utilization in sparsely-gated Mixture-of-Experts (MoE) layers embedded directly into convolutional neural networks. To enable stable training, we present both soft and hard constraint-based approaches. With hard constraints, the weights of certain experts are allowed to become zero, while soft constraints balance the contribution of the experts with an additional auxiliary loss. As a result, soft constraints handle expert utilization better and support the expert specialization process, whereas hard constraints mostly maintain generalized experts and increase model performance for many applications. Our findings demonstrate that even with a single dataset and end-to-end training, experts can implicitly focus on individual sub-domains of the input space, without suitable predefined sub-datasets. For example, experts trained for CIFAR-100 image classification specialize in recognizing distinct domains such as sea animals or flowers without any prior data clustering. Experiments with RetinaNet and the COCO dataset further indicate that object-detection experts can also specialize in detecting objects of distinct sizes.
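To make the soft-constraint idea concrete, the following is a minimal sketch of a sparsely-gated convolutional MoE layer that returns an auxiliary load-balancing term alongside its output, in the spirit of Shazeer et al.'s importance loss. The paper's actual architecture and loss are not specified here; the class name `ConvMoE`, the top-k routing, the pooling-based gate, and the coefficient-of-variation penalty are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a sparsely-gated MoE layer for CNN feature maps
# with a soft constraint (auxiliary load-balancing loss). Names and design
# choices are assumptions for illustration, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvMoE(nn.Module):
    """Routes each input through the top-k of several convolutional experts."""

    def __init__(self, in_ch, out_ch, num_experts=4, k=2):
        super().__init__()
        self.num_experts, self.k = num_experts, k
        self.experts = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            for _ in range(num_experts)
        )
        # Gating network: global average pooling followed by a linear layer.
        self.gate = nn.Linear(in_ch, num_experts)

    def forward(self, x):
        # Per-sample gate logits from globally pooled features.
        pooled = x.mean(dim=(2, 3))                      # (B, in_ch)
        logits = self.gate(pooled)                       # (B, E)
        topk_vals, topk_idx = logits.topk(self.k, dim=1)
        # Sparse gate weights: softmax over the k selected experts only.
        gates = torch.zeros_like(logits).scatter(
            1, topk_idx, F.softmax(topk_vals, dim=1)
        )

        # Weighted sum of the selected experts' outputs.
        out = sum(
            gates[:, e].view(-1, 1, 1, 1) * expert(x)
            for e, expert in enumerate(self.experts)
        )

        # Soft constraint: penalize unbalanced expert importance via the
        # squared coefficient of variation of the batch-summed gate weights.
        importance = gates.sum(dim=0)                    # (E,)
        cv_sq = importance.var() / (importance.mean() ** 2 + 1e-8)
        return out, cv_sq


# Usage: add the auxiliary term to the task loss with a small weight.
if __name__ == "__main__":
    layer = ConvMoE(in_ch=16, out_ch=32, num_experts=4, k=2)
    x = torch.randn(8, 16, 24, 24)
    y, aux = layer(x)
    loss = y.mean() + 0.01 * aux   # placeholder task loss + soft constraint
    loss.backward()
```

Under this reading, a hard constraint would instead prune or zero out expert weights outright, while the soft variant above merely nudges the gate toward balanced utilization through the extra loss term.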