Low-light image enhancement is vital for improving the visibility and quality of images captured under suboptimal lighting conditions. Traditional methods often fail to adequately capture local lighting variations and enhance both textural and chromatic details. Recent deep learning-based approaches, while effective, still struggle with generalization across diverse datasets, leading to noise amplification and unnatural color saturation. To address these challenges, the Adaptive Light Enhancement Network (ALEN) is introduced, a novel method that utilizes a classification mechanism to determine whether local or global illumination enhancement is required. ALEN integrates the Swin Light-Classification Transformer (SLCformer) for illuminance categorization, complemented by the Single-Channel Network (SCNet) and Multi-Channel Network (MCNet) for precise illumination and color estimation, respectively. Extensive experiments on publicly available datasets demonstrate ALEN's robust generalization capabilities, outperforming state-of-the-art methods in both quantitative metrics and qualitative assessments. Furthermore, ALEN not only enhances image quality but also improves the performance of high-level vision tasks such as semantic segmentation, showcasing its broader applicability and potential impact. The code for this method and the datasets are available at https://github.com/xingyumex/ALEN.
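To make the described pipeline concrete, the following is a minimal PyTorch sketch of the adaptive structure outlined above: a classifier decides between global and local enhancement, while single-channel and multi-channel branches estimate illumination and color. The stub internals (layer widths, the Retinex-style division, and the probability-weighted fusion rule) are illustrative assumptions, not the authors' actual SLCformer, SCNet, or MCNet; consult the linked repository for the real architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch of the ALEN pipeline from the abstract. All internals
# below are assumptions for illustration; the real SLCformer is a
# Swin-style transformer and SCNet/MCNet are full estimation networks.

class SLCformerStub(nn.Module):
    """Stand-in illuminance classifier: global vs. local enhancement."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)  # logits: [global, local]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class SCNetStub(nn.Module):
    """Single-channel branch: per-pixel illumination map in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class MCNetStub(nn.Module):
    """Multi-channel branch: per-pixel color estimate in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class ALENSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = SLCformerStub()
        self.scnet = SCNetStub()
        self.mcnet = MCNetStub()

    def forward(self, x):
        # Classify whether the scene needs global or local enhancement.
        probs = torch.softmax(self.classifier(x), dim=1)  # (B, 2)
        illum = self.scnet(x)   # (B, 1, H, W) illumination estimate
        color = self.mcnet(x)   # (B, 3, H, W) color estimate
        # Hypothetical fusion: a Retinex-style division brightens the
        # input, blended per image with the color branch according to
        # the predicted local-enhancement probability.
        w_local = probs[:, 1].view(-1, 1, 1, 1)
        enhanced = x / (illum + 1e-4)
        enhanced = enhanced * (1 - w_local) + color * w_local
        return enhanced.clamp(0, 1)

if __name__ == "__main__":
    out = ALENSketch()(torch.rand(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The routing-then-estimation layout mirrors the abstract's two-stage design: classification first narrows the enhancement mode, so the estimation branches need not generalize across every lighting condition at once.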