Academic Journal of Computing & Information Science, 2025, 8(9); doi: 10.25236/AJCIS.2025.080907.
Qiang Qu, Yingbo Wang
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an, China, 710021
Haze in the atmosphere significantly degrades the quality of images captured by sensors by obscuring key visual features, thereby impairing the performance of downstream computer vision tasks. In real-world scenarios, haze often exhibits complex multi-scale characteristics and spatial non-uniformity, posing significant challenges for image dehazing due to severe detail loss and contrast reduction in heavily hazy regions. To address these issues, this paper proposes a novel dehazing method tailored for non-uniform haze removal, based on a haze density-aware U-shaped network architecture. The proposed framework comprises two core components: a haze density perception module and a U-shaped encoder-decoder network built upon NMF (Nonlinear Activation Mamba Fog) blocks. The haze density perception module employs a lightweight convolutional neural network (CNN) to estimate pixel-level haze density maps and adaptively adjusts input weights based on haze density to accommodate scenes with varying haze intensities, thereby effectively enhancing the network's contextual perception across regions of differing haze density. The NMF block replaces conventional nonlinear activation functions with a learnable gating mechanism, facilitating implicit nonlinear transformation while mitigating gradient saturation during training. Moreover, the NMF module adopts a dual-branch structure that integrates: (1) a spatial-channel attention (SCA) mechanism to emphasize informative features across both spatial and channel dimensions, and (2) a Mamba-based state-space model for long-range spatial dependency modeling, which captures the global distribution patterns of haze more effectively than local convolutions. Experimental results demonstrate that the proposed method achieves highly competitive performance on non-uniform haze image datasets, effectively improving detail restoration and color fidelity in image dehazing.
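Two of the mechanisms described above can be sketched in a few lines. The sketch below is illustrative only, under stated assumptions: the "learnable gating" is modeled after a NAFNet-style SimpleGate (split the channel dimension in half and take the elementwise product, so the nonlinearity is implicit and saturation-free), and the density-aware reweighting is shown as a hypothetical per-pixel scaling of the input by the estimated haze density map. The function names, the scaling formula, and the parameter `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def simple_gate(x: np.ndarray) -> np.ndarray:
    """Implicit nonlinearity in the SimpleGate style (assumed form):
    split the channel axis in half and multiply the halves elementwise,
    replacing an explicit activation such as ReLU/GELU."""
    c = x.shape[0] // 2
    return x[:c] * x[c:]

def density_weighted_input(img: np.ndarray, density: np.ndarray,
                           alpha: float = 1.0) -> np.ndarray:
    """Hypothetical density-aware input modulation: pixels with higher
    estimated haze density receive a larger weight, so heavily hazy
    regions contribute more strongly to subsequent feature extraction."""
    return img * (1.0 + alpha * density)

# Toy usage: a 4-channel feature map gated down to 2 channels,
# and a 1-channel image reweighted by a per-pixel density map.
feat = np.arange(8, dtype=float).reshape(4, 1, 2)   # (C=4, H=1, W=2)
gated = simple_gate(feat)                           # (C=2, H=1, W=2)
img = np.ones((1, 2, 2))
dens = np.array([[[0.0, 0.5], [0.5, 1.0]]])         # denser haze -> larger weight
weighted = density_weighted_input(img, dens)
```

Note that because the gate halves the channel count, the preceding convolution in such a block typically doubles its output channels to compensate.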
Image Dehazing, Fog Density Estimation, Non-Uniform Haze, Mamba
Qiang Qu, Yingbo Wang. DAU-Net: Density-Aware U-Shaped Network for Non-Uniform Haze Removal. Academic Journal of Computing & Information Science (2025), Vol. 8, Issue 9: 46-53. https://doi.org/10.25236/AJCIS.2025.080907.