Low-light image enhancement algorithm based on illumination and scene texture attention map
2024, Vol. 29, No. 4: 862-874
Print publication date: 2024-04-16
DOI: 10.11834/jig.230271
Zhao Minghua, Wen Yichun, Du Shuangli, Hu Jing, Shi Cheng, Li Peng. 2024. Low-light image enhancement algorithm based on illumination and scene texture attention map. Journal of Image and Graphics, 29(04):0862-0874
Objective
Existing low-light image enhancement algorithms often suffer from local under-enhancement, over-enhancement, and color deviation; for extremely low-light images, enhancement is further accompanied by noise amplification and loss of detail. To address these problems, a low-light image enhancement algorithm based on illumination and scene texture attention maps is proposed.
Method
First, to reduce the influence of color deviation on the attention-map estimation module, color equalization is applied to the low-light image. Second, the minimum channel constraint map of the low-light image is used to estimate an attention map of the illumination and texture of the corresponding normal-exposure image, which provides information guidance for the subsequent enhancement module. Finally, an enhancement module combining global and local processing is designed: the estimated illumination and scene texture attention maps guide brightness improvement and noise suppression, and the globally enhanced result is divided into image patches for local optimization, which improves enhancement performance and effectively avoids local under-enhancement and over-enhancement.
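A minimal sketch of a color equalization step of this kind, assuming the per-channel illumination is approximated by a high percentile of each channel (the paper estimates it with the dark channel prior; the percentile shortcut, function name, and gain rule below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def color_equalize(img):
    """Toy color equalization: scale each RGB channel so its estimated
    illumination matches the cross-channel mean, reducing color cast.
    `img` is a float array in [0, 1] with shape (H, W, 3)."""
    # Per-channel illumination intensity, approximated by a high percentile
    # (stand-in for the paper's dark-channel-prior-based estimate).
    light = np.percentile(img.reshape(-1, 3), 99, axis=0)  # shape (3,)
    target = light.mean()
    # Gain that pulls every channel's illumination toward the common target.
    gains = target / np.maximum(light, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```

After this step, the light intensity of the three channels is similar, so a color cast in the input no longer biases the attention-map estimation.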
Result
The proposed algorithm is compared with two traditional methods and four deep learning methods. Both subjective visual inspection and objective metrics show that the enhanced results achieve excellent performance in brightness, contrast, and noise suppression. On the VV (Vasileios Vonikakis) dataset, the proposed method achieves the best BTMQI (blind tone-mapped quality index) and NIQMC (no-reference image quality metric for contrast distortion) scores; on 178 ordinary low-light images, it achieves the second-best BTMQI and NIQMC scores while showing clear advantages in texture prominence and noise suppression.
Conclusion
Extensive qualitative and quantitative experimental results show that the proposed method effectively improves image brightness and contrast and suppresses noise while bringing out texture in dark regions. When applied to extremely low-light images, the method shows clear advantages in color restoration, detail and texture recovery, and noise suppression. The code is available at https://github.com/shuanglidu/LLIE_CEIST.git
Objective
Owing to the lack of sufficient environmental light, images captured in low-light scenes often suffer from several kinds of degradation, such as low visibility, low contrast, intensive noise, and color distortion. Such degradations not only lower the visual quality of the images but also reduce the performance of subsequent mid- and high-level vision tasks, such as object detection and recognition, semantic segmentation, and automatic driving. Therefore, images taken under low-light conditions should be enhanced before subsequent use. Low-light image enhancement, one of the most important low-level vision tasks, aims at improving illumination and recovering image details in dark regions while limiting noise, and it has been studied intensively. Many impressive traditional and deep learning-based methods have been proposed. Traditional image processing techniques mainly include value mapping (such as histogram equalization and gamma correction) and model-based methods (such as the Retinex model and the atmospheric scattering model). However, they improve image quality only from a single perspective, such as contrast or dynamic range, and neglect degradations such as noise and the loss of scene detail. In contrast, with the great development of deep neural networks in low-level computer vision, deep learning-based methods can simultaneously optimize the enhancement result from multiple perspectives, such as brightness, color, and contrast; thus, enhancement performance is significantly improved. Although notable progress has been achieved, existing deep learning-based enhancement methods still exhibit drawbacks such as under-enhancement, over-enhancement, and color distortion in local areas, and their results can be inconsistent with the visual characteristics of the human eye.
In addition, given the high degree of distortion in extremely low-light images, recovering scene details and suppressing noise amplification during enhancement is usually difficult. Therefore, increased attention should be paid to low-light image enhancement methods. To this end, a low-light image enhancement algorithm based on illumination and scene texture attention maps is proposed in this paper.
Method
First, unlike normal-light images, low-light images exhibit obviously different illumination intensities across the RGB channels, which leads to apparent color distortion. Color equalization is therefore performed on the low-light image to reduce the influence of color distortion on the attention-map estimation module. We implement color equalization using the illumination intensity of the RGB channels estimated with the dark channel prior, so that the light intensity of each channel becomes similar. Second, considering that the minimum channel constraint map suppresses noise while making texture prominent, we estimate the illumination and texture attention map of the normal-exposure image from the minimum channel constraint map of the low-light image, providing information guidance for the subsequent enhancement module; to this end, an attention-map estimation module based on the U-Net architecture is proposed. Third, an enhancement module is developed to improve image quality at the level of both the whole image and local patches. In the global enhancement stage, the estimated illumination and scene texture attention map guides illumination adjustment and noise suppression. The attention mechanism lets the network allocate different amounts of attention to areas of different brightness during training, helping it focus on useful information effectively. The globally enhanced result is then divided into small patches to address under-enhancement and over-enhancement in local areas and further improve the result.
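Two ingredients of the pipeline above can be sketched in a few lines, assuming only that the minimum channel constraint map is the per-pixel minimum over the RGB channels and that local refinement operates on non-overlapping patches (the patch size here is illustrative; the paper does not fix it in this abstract):

```python
import numpy as np

def min_channel_map(img):
    """Minimum channel constraint map: per-pixel minimum over RGB.
    Taking the channel-wise minimum tends to suppress chromatic noise while
    keeping scene texture, which is why it is fed to the attention network.
    `img` has shape (H, W, 3); the result has shape (H, W)."""
    return img.min(axis=2)

def split_into_patches(img, ph, pw):
    """Partition a globally enhanced result into non-overlapping patches
    for local optimization. Assumes H and W are multiples of ph and pw."""
    h, w = img.shape[:2]
    return [img[i:i + ph, j:j + pw]
            for i in range(0, h, ph)
            for j in range(0, w, pw)]
```

Each patch would then be refined independently before the results are stitched back together, which is what lets the method correct locally under- or over-enhanced regions that a single global mapping cannot fix.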
Result
To verify the effectiveness of the proposed method, we compare it with six state-of-the-art enhancement methods: two traditional methods, semi-decoupled decomposition (SDD) and the plug-and-play Retinex model (PnPR), and four deep learning-based methods, EnlightenGAN, zero-reference deep curve estimation (Zero-DCE), the Retinex-based deep unfolding network (URetinex-Net), and signal-to-noise-ratio-aware low-light image enhancement (SNR-aware). We use digital images from commercial cameras (DICM), low-light image enhancement (LIME), multi-exposure image fusion (MEF), and nine other datasets to construct a test set of 178 low-light images. These low-light images have no normal-exposure images for reference. Quantitative and qualitative evaluations are performed. For the quantitative evaluation, the natural image quality evaluator (NIQE), the blind tone-mapped quality index (BTMQI), and the no-reference image quality metric for contrast distortion (NIQMC) are used to assess image quality. NIQE examines the image against a learned model of natural images. BTMQI evaluates the perceptual quality of tone-mapped images by analyzing their naturalness and structure. For NIQE and BTMQI, lower values indicate higher natural image quality. NIQMC evaluates image quality by measuring the contrast between the local and related properties of blocks in the image; higher scores indicate better quality. On the VV dataset, which is a challenging dataset, our method obtains the best results on the BTMQI and NIQMC indicators. Experiments on the 178 low-light images show that our method achieves the second-best values on the BTMQI and NIQMC metrics while offering significant advantages in texture prominence and noise suppression.
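The block-based idea behind a contrast metric such as NIQMC can be illustrated with a toy score: the mean RMS contrast over non-overlapping blocks of a grayscale image. This is a simplified, hypothetical stand-in written for illustration only; it is not the actual NIQMC metric, which is based on information maximization over local and related block properties.

```python
import numpy as np

def block_contrast_score(gray, block=8):
    """Toy no-reference contrast score: average per-block standard deviation
    (RMS contrast) over non-overlapping blocks. Higher means more contrast.
    `gray` is a 2D float array; `block` is the block side length."""
    h, w = gray.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            scores.append(gray[i:i + block, j:j + block].std())
    return float(np.mean(scores))
```

A flat image scores 0, while an image with strong local variation scores higher, mirroring the "higher is better" direction of NIQMC.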
Conclusion
Experimental results indicate that the results enhanced by our method achieve the expected visual effects in terms of brightness, contrast, and noise suppression. In addition, our method produces the expected enhancement results for extremely low-light images.
Keywords: low-light image enhancement; attention mechanism; U-Net network; illumination estimation; minimum channel constraint map
Abdullah-Al-Wadud M, Kabir M H, Dewan M A A and Chae O. 2007. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 53(2): 593-600 [DOI: 10.1109/TCE.2007.381734]
Chen C, Chen Q F, Xu J and Koltun V. 2018. Learning to see in the dark//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 3291-3300 [DOI: 10.1109/CVPR.2018.00347]
Du S L, Dang H, Zhao M H and Shi Z H. 2023. Low-light image enhancement and denoising with internal and external priors. Journal of Image and Graphics, 28(9): 2844-2855 [DOI: 10.11834/jig.220707]
Fu X Y, Zeng D L, Huang Y, Zhang X P and Ding X H. 2016. A weighted variational model for simultaneous reflectance and illumination estimation//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE: 2782-2790 [DOI: 10.1109/CVPR.2016.304]
Gu K, Lin W S, Zhai G T, Yang X K, Zhang W J and Chen C W. 2017. No-reference quality metric of contrast-distorted images based on information maximization. IEEE Transactions on Cybernetics, 47(12): 4559-4565 [DOI: 10.1109/TCYB.2016.2575544]
Gu K, Wang S Q, Zhai G T, Ma S W, Yang X K, Lin W S, Zhang W J and Gao W. 2016. Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure. IEEE Transactions on Multimedia, 18(3): 432-443 [DOI: 10.1109/TMM.2016.2518868]
Guo X J. 2016. LIME: a method for low-light image enhancement//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam, the Netherlands: ACM: 87-91 [DOI: 10.1145/2964284.2967188]
Guo Y K, Zhu Y C, Liu L P and Huang Q. 2022. Research review of space-frequency domain image enhancement methods. Computer Engineering and Applications, 58(11): 23-32 [DOI: 10.3778/j.issn.1002-8331.2112-0280]
Hao S J, Han X, Guo Y R, Xu X and Wang M. 2020. Low-light image enhancement with semi-decoupled decomposition. IEEE Transactions on Multimedia, 22(12): 3025-3038 [DOI: 10.1109/TMM.2020.2969790]
Jiang Y F, Gong X Y, Liu D, Cheng Y, Fang C, Shen X H, Yang J C, Zhou P and Wang Z Y. 2021. EnlightenGAN: deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 30: 2340-2349 [DOI: 10.1109/TIP.2021.3051462]
Jiang Z T, Qin L L, Qin J Q and Zhang S Q. 2021. Low-light image enhancement method based on MDARNet. Journal of Software, 32(12): 3977-3991 [DOI: 10.13328/j.cnki.jos.006112]
Lee C, Lee C and Kim C S. 2012. Contrast enhancement based on layered difference representation//Proceedings of the 19th IEEE International Conference on Image Processing. Orlando, USA: IEEE: 965-968 [DOI: 10.1109/ICIP.2012.6467022]
Li C Y, Guo C L, Han L H, Jiang J, Cheng M M, Gu J W and Loy C C. 2022a. Low-light image and video enhancement using deep learning: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 9396-9416 [DOI: 10.1109/TPAMI.2021.3126387]
Li C Y, Guo C L and Loy C C. 2022b. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8): 4225-4238 [DOI: 10.1109/TPAMI.2021.3063604]
Lim S and Kim W. 2021. DSLR: deep stacked Laplacian restorer for low-light image enhancement. IEEE Transactions on Multimedia, 23: 4272-4284 [DOI: 10.1109/TMM.2020.3039361]
Lin Y H and Lu Y C. 2022. Low-light enhancement using a plug-and-play Retinex model with shrinkage mapping for illumination estimation. IEEE Transactions on Image Processing, 31: 4897-4908 [DOI: 10.1109/TIP.2022.3189805]
Lore K G, Akintayo A and Sarkar S. 2017. LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61: 650-662 [DOI: 10.1016/j.patcog.2016.06.008]
Lyu F F, Li Y and Lu F. 2021. Attention guided low-light image enhancement with a large scale low-light simulation dataset. International Journal of Computer Vision, 129(7): 2175-2193 [DOI: 10.1007/s11263-021-01466-8]
Lyu F F, Liu B and Lu F. 2020. Fast enhancement for non-uniform illumination images using light-weight CNNs//Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: ACM: 1450-1458 [DOI: 10.1145/3394171.3413925]
Ma K D, Zeng K and Wang Z. 2015. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing, 24(11): 3345-3356 [DOI: 10.1109/TIP.2015.2442920]
Ma L, Ma T Y and Liu R S. 2022. The review of low-light image enhancement. Journal of Image and Graphics, 27(5): 1392-1409 [DOI: 10.11834/jig.210852]
Mittal A, Soundararajan R and Bovik A C. 2013. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 20(3): 209-212 [DOI: 10.1109/LSP.2012.2227726]
Peli E. 1990. Contrast in complex images. Journal of the Optical Society of America A, 7(10): 2032-2040 [DOI: 10.1364/JOSAA.7.002032]
Ren W Q, Liu S F, Ma L, Xu Q Q, Xu X Y, Cao X C, Du J P and Yang M H. 2019. Low-light image enhancement via a deep hybrid network. IEEE Transactions on Image Processing, 28(9): 4364-4375 [DOI: 10.1109/TIP.2019.2910412]
Vonikakis V. 2017. Busting image enhancement and tone-mapping algorithms: a collection of the most challenging cases [DB/OL]. [2023-03-10]. https://sites.google.com/site/vonikakis/datasets
Wang Q T, Chen W H, Wu X M and Li Z G. 2020. Detail-enhanced multi-scale exposure fusion in YUV color space. IEEE Transactions on Circuits and Systems for Video Technology, 30(8): 2418-2429 [DOI: 10.1109/TCSVT.2019.2919310]
Wang R X, Zhang Q, Fu C W, Shen X Y, Zheng W S and Jia J Y. 2019a. Underexposed photo enhancement using deep illumination estimation//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 6842-6850 [DOI: 10.1109/CVPR.2019.00701]
Wang S H, Zheng J, Hu H M and Li B. 2013. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9): 3538-3548 [DOI: 10.1109/TIP.2013.2261309]
Wang Y, Cao Y, Zha Z J, Zhang Z J, Zhang J, Xiong Z W, Zhang W and Wu F. 2019b. Progressive retinex: mutually reinforced illumination-noise perception network for low-light image enhancement//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM: 2015-2023 [DOI: 10.1145/3343031.3350983]
Wang Y F, Liu H M and Fu Z W. 2019c. Low-light image enhancement via the absorption light scattering model. IEEE Transactions on Image Processing, 28(11): 5679-5690 [DOI: 10.1109/TIP.2019.2922106]
Wang Z, Simoncelli E P and Bovik A C. 2003. Multiscale structural similarity for image quality assessment//Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers. Pacific Grove, USA: IEEE: 1398-1402 [DOI: 10.1109/ACSSC.2003.1292216]
Wei C, Wang W J, Yang W H and Liu J Y. 2018. Deep retinex decomposition for low-light enhancement [EB/OL]. [2023-03-10]. https://arxiv.org/pdf/1808.04560.pdf
Wu W H, Weng J, Zhang P P, Wang X, Yang W H and Jiang J M. 2022. URetinex-Net: retinex-based deep unfolding network for low-light image enhancement//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE: 5891-5900 [DOI: 10.1109/CVPR52688.2022.00581]
Xu J, Hou Y K, Ren D W, Liu L, Zhu F, Yu M Y, Wang H Q and Shao L. 2020a. STAR: a structure and texture aware retinex model. IEEE Transactions on Image Processing, 29: 5022-5037 [DOI: 10.1109/TIP.2020.2974060]
Xu K, Yang X, Yin B C and Lau R W H. 2020b. Learning to restore low-light images via decomposition-and-enhancement//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 2278-2287 [DOI: 10.1109/CVPR42600.2020.00235]
Xu X G, Wang R X, Fu C W and Jia J Y. 2022. SNR-aware low-light image enhancement//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 17693-17703 [DOI: 10.1109/CVPR52688.2022.01719]
Yang W H, Wang W J, Huang H F, Wang S Q and Liu J Y. 2021. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Transactions on Image Processing, 30: 2072-2086 [DOI: 10.1109/TIP.2021.3050850]
Ying Z Q, Li G, Ren Y R, Wang R G and Wang W M. 2017. A new image contrast enhancement algorithm using exposure fusion framework//Proceedings of the 17th International Conference on Computer Analysis of Images and Patterns. Ystad, Sweden: Springer: 36-46 [DOI: 10.1007/978-3-319-64698-5_4]
Yuan L and Sun J. 2012. Automatic exposure correction of consumer photographs//Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer: 771-785 [DOI: 10.1007/978-3-642-33765-9_55]
Zhao M H, Cheng D N, Du S L, Hu J, Shi C and Shi Z H. 2022. An improved fusion strategy based on transparency-guided backlit image enhancement. Journal of Image and Graphics, 27(5): 1554-1564 [DOI: 10.11834/jig.210739]
Zhao Z J, Xiong B S, Wang L, Qu Q F, Yu L and Kuang F. 2022. RetinexDIP: a unified deep framework for low-light image enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1076-1088 [DOI: 10.1109/TCSVT.2021.3073371]