Deep learning models face persistent challenges in training, particularly due to internal covariate shift and label shift. While single-mode normalization methods like Batch Normalization partially address these issues, they are constrained by batch size dependencies and restrictive distributional assumptions. Multi-mode normalization techniques mitigate these limitations but struggle with computational demands when modeling diverse Gaussian distributions. In this paper, we introduce a new approach to multi-mode normalization that leverages prior knowledge to improve neural network representations. Our method organizes data into predefined structures, or "contexts", prior to training and normalizes each sample using the statistics of its context, with two variants: Context Normalization (CN) and Context Normalization - Extended (CN-X). When contexts are unavailable, we introduce Adaptive Context Normalization (ACN), which dynamically builds contexts in the latent space during training. Across tasks in image classification, domain adaptation, and image generation, our methods demonstrate superior convergence and performance.
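As a rough illustration of the core idea, the minimal NumPy sketch below normalizes each sample with the mean and variance of its assigned context. The function name `context_normalize`, the omission of learnable scale/shift parameters, and the toy context assignment are assumptions made for illustration only, not the paper's CN/CN-X/ACN implementation.

```python
import numpy as np

def context_normalize(x, context_ids, eps=1e-5):
    """Normalize each sample with the statistics of its assigned context.

    x           : (N, D) array of activations.
    context_ids : (N,) integer array assigning each sample to a context.

    Illustrative sketch only: the learnable affine parameters and the
    adaptive context construction described in the paper are omitted.
    """
    out = np.empty_like(x, dtype=np.float64)
    for c in np.unique(context_ids):
        mask = context_ids == c
        mu = x[mask].mean(axis=0)          # per-context mean
        var = x[mask].var(axis=0)          # per-context variance
        out[mask] = (x[mask] - mu) / np.sqrt(var + eps)
    return out

# Toy usage: two contexts drawn from different Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5.0, 2.0, (8, 3)),
                    rng.normal(-1.0, 0.5, (8, 3))])
ctx = np.array([0] * 8 + [1] * 8)
x_norm = context_normalize(x, ctx)
print(x_norm.mean(axis=0), x_norm.std(axis=0))  # ~0 mean, ~1 std per feature
```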