Deep learning grapples with persistent challenges in training neural networks, notably internal covariate shift and label shift. Conventional normalization techniques such as Batch Normalization (BN) partially mitigate these issues but are constrained by their dependence on batch size and on assumptions about the activation distribution. Similarly, Mixture Normalization (MN) faces computational barriers when handling diverse Gaussian components. This paper introduces Cluster-based Normalization (CB-Norm), a single-step normalization strategy with two variants: Supervised Cluster-based Normalization (SCB-Norm) and Unsupervised Cluster-based Normalization (UCB-Norm). CB-Norm employs a Gaussian mixture model to address challenges of gradient stability and learning acceleration. SCB-Norm normalizes activations using predefined data partitions, termed clusters, while UCB-Norm adaptively clusters neuron activations during training, eliminating any reliance on predefined partitions. The approach thereby tackles clustering and the target task jointly within the neural network, reducing computational complexity compared to existing methods. CB-Norm outperforms traditional techniques such as BN and MN, enhancing neural network performance across diverse learning scenarios.
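To make the idea concrete, below is a minimal sketch of the supervised variant (SCB-Norm) as described in the abstract: each activation is normalized with the mean and variance of its own cluster rather than a single batch-wide statistic as in BN. The class name `SCBNorm`, the `cluster_ids` argument, and the choice of affine parameters shared across clusters are illustrative assumptions, not the authors' implementation or API.

```python
import torch
import torch.nn as nn

class SCBNorm(nn.Module):
    """Illustrative per-cluster normalization (not the paper's official code).

    Activations are standardized using the statistics of the predefined
    cluster each sample belongs to, then passed through a learnable affine
    transform shared across clusters (one possible design choice).
    """

    def __init__(self, num_features: int, num_clusters: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.num_clusters = num_clusters
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor, cluster_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, features); cluster_ids: (batch,) with values in [0, num_clusters).
        out = torch.empty_like(x)
        for k in range(self.num_clusters):
            mask = cluster_ids == k
            if mask.any():
                xk = x[mask]
                mu = xk.mean(dim=0)                  # per-cluster mean
                var = xk.var(dim=0, unbiased=False)  # per-cluster variance
                out[mask] = (xk - mu) / torch.sqrt(var + self.eps)
        return self.gamma * out + self.beta

# Usage with hypothetical shapes: 32 samples, 8 features, 3 clusters.
norm = SCBNorm(num_features=8, num_clusters=3)
x = torch.randn(32, 8)
ids = torch.randint(0, 3, (32,))
y = norm(x, ids)
```

In the unsupervised variant (UCB-Norm), the cluster assignments would not be given as input; instead, mixture-component responsibilities are estimated from the activations themselves during training, which is what removes the reliance on predefined partitions.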