Esophageal endoscopic image enhancement method without reference samples
2024, Vol. 29, No. 11: 3487-3500
Print publication date: 2024-11-16
DOI: 10.11834/jig.230865
Yao Hanmin, Zhou Yingyue, Guo Junfei, Qin Jiamin, Li Xiaoxia, Dong Shuqi. 2024. Esophageal endoscopic image enhancement method without reference samples. Journal of Image and Graphics, 29(11):3487-3500
Objective
Esophageal cancer is one of the most common malignant tumors threatening human health. At present, endoscopy combined with histopathological biopsy is the “gold standard” for diagnosing early esophageal cancer. Among endoscopic techniques, Lugol’s chromo endoscopy (LCE) has a unique advantage in gastroenterology because of its good lesion visibility, diagnostic accuracy, and low cost. However, with the rising number of patients, the imbalance between the numbers of doctors and patients is becoming increasingly serious. The manual diagnosis process based on endoscopic images is susceptible to several factors, such as the experience and mental state of the doctor, the limited diagnosis time, the enormous number of images, and the complex and variable appearance of lesions. Therefore, manual clinical diagnosis of esophageal lesions still has a high rate of missed and incorrect diagnoses. In recent years, the application of artificial intelligence (AI) in medical imaging has provided strong support for doctors, and AI-assisted diagnosis systems based on deep learning can help doctors accurately determine the location and type of lesions, reducing their burden. However, deep learning models need sufficient high-quality data. LCE esophageal endoscopic images are inevitably affected by the built-in light source of the acquisition device: because the direction and angle of illumination are limited, the light distribution is uneven, degrading overall image quality and hindering the subsequent training of intelligent lesion detection models.
Existing low-light image enhancement algorithms are not ideal for LCE esophageal endoscopic images because of their special characteristics: complex illumination, color sensitivity, and the lack of high-quality reference (paired or unpaired) datasets.
Method
Building on the “generative” decomposition strategy of the RetinexDIP algorithm, this paper uses convolutional neural networks to generate the illumination and reflectance components of the Retinex model, thereby decomposing the image, and proposes a stable generating network (SgNet) to solve the aforementioned problem. The network adopts an encoder-decoder structure. A channel attention adjustment module (CAAM), proposed in this paper, adjusts feature maps with the same number of channels in the encoder and decoder so that the corresponding feature channel weights remain consistent. This module reduces the influence of irrelevant or redundant feature channels, minimizes noise interference, enhances the stability of the generating network, and improves the quality of the generated image. Simultaneously, a new color model, “fixed proportion light” (FPL), is proposed, which represents the brightness information and the color proportion information of the image independently; the entire light enhancement process adjusts only the brightness channel. Thus, the overall color information of LCE esophageal endoscopic images is not disturbed.
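The abstract does not give CAAM's exact internal structure, so the following is only a hypothetical NumPy sketch of the stated idea: derive one set of channel weights (here via squeeze-and-excitation-style gating, an assumption) and apply the same weights to encoder and decoder feature maps with matching channel counts, so corresponding channels keep consistent importance.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_weights(feat, w1, w2):
    """SE-style gating: feat is (C, H, W); returns per-channel weights in (0, 1)."""
    pooled = feat.mean(axis=(1, 2))        # (C,) global average pool
    hidden = np.maximum(w1 @ pooled, 0.0)  # ReLU bottleneck, (C//2,)
    return sigmoid(w2 @ hidden)            # (C,)

def caam_adjust(enc_feat, dec_feat, w1, w2):
    """Hypothetical CAAM sketch: one weight vector, computed from the encoder
    feature, rescales BOTH encoder and decoder features so the corresponding
    channel weights stay consistent across the encoder-decoder path."""
    w = channel_weights(enc_feat, w1, w2)[:, None, None]
    return enc_feat * w, dec_feat * w
```

Sharing one weight vector (rather than gating each side independently) is what keeps the "corresponding feature channel weights consistent" in the sense described above; the bottleneck weights `w1`, `w2` would be learned in the real network.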
Result
The effectiveness of the proposed algorithm is tested on the self-built LCE low-light image dataset, and its visual effect and objective indices are compared with those of numerous mainstream low-light image enhancement algorithms. Two no-reference quality assessment metrics are used: the natural image quality evaluator (NIQE) and the blind/referenceless image spatial quality evaluator (BRISQUE). NIQE estimates image quality by measuring the deviation of an image from the statistical regularities of natural images, which aligns well with human subjective quality evaluation. The BRISQUE index measures the degree of image distortion and estimates a quality score from brightness, contrast, sharpness, color saturation, and other factors. In the visual comparison, the proposed algorithm has advantages in color fidelity, contrast enhancement, and noise reduction. In the objective comparison, the proposed algorithm ranks first on the NIQE index and second only to the GCP algorithm on the BRISQUE index. Overall, the proposed algorithm has clear advantages in both visual effect and objective indices. In addition, tests on four publicly available low-light image datasets (DICM, Fusion, LIME, and NPE) and on the publicly available low-light endoscopic image dataset Endo4IE show that the proposed algorithm performs well across different datasets, especially for the complex low-light characteristics of endoscopic images.
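For reference, the deviation NIQE measures is, in the standard formulation (Mittal et al., 2013), a distance between two multivariate Gaussian fits: one over natural-scene-statistics features of a pristine image corpus, one over features of the test image. The feature extraction itself is omitted here; this minimal sketch shows only the final distance computation:

```python
import numpy as np

def niqe_distance(mu_ref, cov_ref, mu_test, cov_test):
    """NIQE-style score: Mahalanobis-like distance between the reference
    (natural-image) Gaussian fit and the test-image Gaussian fit.
    Lower values mean the image looks statistically more 'natural'."""
    diff = np.asarray(mu_ref) - np.asarray(mu_test)
    pooled = (np.asarray(cov_ref) + np.asarray(cov_test)) / 2.0
    # pinv guards against a singular pooled covariance
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

An identical fit yields a score of 0; the further the test image's feature statistics drift from the natural-image model, the larger the score.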
Conclusion
The SgNet network proposed in this paper effectively utilizes the feature channel weight information in the encoder-decoder process to improve the quality of the generated image. The illumination and reflection components of the image can be effectively generated without the need for a low-light–normal-light image pair. The proposed FPL color model can effectively ensure that the overall color information of LCE esophageal endoscopic images is not disorganized during the enhancement process. According to the experimental results, the proposed algorithm not only enhances the brightness of LCE esophageal endoscopy images but also effectively maintains the color and texture details of the images, which can help doctors observe the lesion tissue structure and details, improve diagnostic accuracy, and provide high-quality image data for the subsequent intelligent detection of lesions.
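The FPL idea described above, brightening an image while holding each pixel's color proportions fixed, can be sketched as follows. The abstract does not specify FPL's exact brightness definition or how the learned network adjusts it, so this hypothetical sketch uses the RGB channel sum as the brightness channel and a gamma curve as a stand-in for the learned adjustment:

```python
import numpy as np

def fpl_decompose(img):
    """Split an RGB image (H, W, 3), floats in [0, 1], into a brightness map
    and per-pixel color proportions that sum to 1 (the 'fixed proportion')."""
    light = img.sum(axis=2)
    prop = img / np.maximum(light[..., None], 1e-6)
    return light, prop

def fpl_recompose(light, prop):
    return prop * light[..., None]

def enhance_fpl(img, gamma=0.6):
    """Enhance only the brightness map; the color proportions are untouched,
    so the overall color information is preserved."""
    light, prop = fpl_decompose(img)
    peak = max(light.max(), 1e-6)
    light_enh = peak * (light / peak) ** gamma  # gamma < 1 brightens dark regions
    return np.clip(fpl_recompose(light_enh, prop), 0.0, 1.0)
```

Because only `light` is modified, every output pixel keeps the same channel ratios as the input, which is the property the FPL model relies on to avoid color distortion in LCE images.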
image enhancement; Lugol’s chromo endoscopy (LCE); Retinex model; image generation; color model
Cai S L, Li B, Tan W M, Niu X J, Yu H H, Yao L Q, Zhou P H, Yan B and Zhong Y S. 2019. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video). Gastrointestinal Endoscopy, 90(5): 745-753.e2 [DOI: 10.1016/j.gie.2019.06.044]
Chen W Q, Zheng R S, Zhang S W, Zeng H M, Xia C F, Zuo T T, Yang Z X, Zou X N and He J. 2017. Cancer incidence and mortality in China, 2013. Cancer Letters, 401: 63-71 [DOI: 10.1016/j.canlet.2017.04.024]
Du S L, Dang H, Zhao M H and Shi Z H. 2023. Low-light image enhancement and denoising with internal and external priors. Journal of Image and Graphics, 28(9): 2844-2855 [DOI: 10.11834/jig.220707]
Fan G D, Fan B, Gan M, Chen G Y and Chen C L P. 2022. Multiscale low-light image enhancement network with illumination constraint. IEEE Transactions on Circuits and Systems for Video Technology, 32(11): 7403-7417 [DOI: 10.1109/TCSVT.2022.3186880]
Fu X Y, Zeng D L, Huang Y, Ding X H and Zhang X P. 2013. A variational framework for single low light image enhancement using bright channel prior//Proceedings of 2013 IEEE Global Conference on Signal and Information Processing. Austin, USA: IEEE: 1085-1088 [DOI: 10.1109/GlobalSIP.2013.6737082]
Fu X Y, Zeng D L, Huang Y, Liao Y H, Ding X H and Paisley J. 2016a. A fusion based enhancing method for weakly illuminated images. Signal Processing, 129: 82-96 [DOI: 10.1016/j.sigpro.2016.05.031]
Fu X Y, Zeng D L, Huang Y, Zhang X P and Ding X H. 2016b. A weighted variational model for simultaneous reflectance and illumination estimation//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE: 2782-2790 [DOI: 10.1109/CVPR.2016.304]
García-Vega A, Espinosa R, Ochoa-Ruiz G, Bazin T, Falcón-Morales L, Lamarque D and Daul C. 2022. A novel hybrid endoscopic dataset for evaluating machine learning-based photometric image enhancement models//Proceedings of 2022 Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence. Monterrey, Mexico: Springer: 267-281 [DOI: 10.1007/978-3-031-19493-1_22]
Gharbi M, Chen J W, Barron J T, Hasinoff S W and Durand F. 2017. Deep bilateral learning for real-time image enhancement [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1707.02880.pdf
Guo X J, Li Y and Ling H B. 2017. LIME: low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2): 982-993 [DOI: 10.1109/TIP.2016.2639450]
Hao S J, Han X, Guo Y R, Xu X and Wang M. 2020. Low-light image enhancement with semi-decoupled decomposition. IEEE Transactions on Multimedia, 22(12): 3025-3038 [DOI: 10.1109/TMM.2020.2969790]
Huang Y, Peng H, Li C S, Gao S M and Chen F. 2024. LLFlowGAN: a low-light image enhancement method for constraining invertible flow in a generative adversarial manner. Journal of Image and Graphics, 29(1): 65-79 [DOI: 10.11834/jig.230063]
Ibrahim H and Pik Kong N S. 2007. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 53(4): 1752-1758 [DOI: 10.1109/TCE.2007.4429280]
Jeon J J, Park J Y and Eom I K. 2024. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognition, 146: #110001 [DOI: 10.1016/j.patcog.2023.110001]
Jiang Y F, Gong X Y, Liu D, Cheng Y, Fang C, Shen X H, Yang J C, Zhou P and Wang Z Y. 2021. EnlightenGAN: deep light enhancement without paired supervision [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1906.06972.pdf
Jobson D J, Rahman Z and Woodell G A. 1997a. Properties and performance of a center/surround Retinex. IEEE Transactions on Image Processing, 6(3): 451-462 [DOI: 10.1109/83.557356]
Jobson D J, Rahman Z and Woodell G A. 1997b. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7): 965-976 [DOI: 10.1109/83.597272]
Land E H. 1977. The Retinex theory of color vision. Scientific American, 237(6): 108-129 [DOI: 10.1038/scientificamerican1277-108]
Lyu F F, Lu F, Wu J H and Lim C. 2018. MBLLEN: low-light image/video enhancement using CNNs//Proceedings of 2018 British Machine Vision Conference. Newcastle, UK: BMVA Press: #220
Mittal A, Moorthy A K and Bovik A C. 2012. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12): 4695-4708 [DOI: 10.1109/TIP.2012.2214050]
Mittal A, Soundararajan R and Bovik A C. 2013. Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters, 20(3): 209-212 [DOI: 10.1109/LSP.2012.2227726]
Park S, Yu S, Moon B, Ko S and Paik J. 2017. Low-light image enhancement using variational optimization-based Retinex model. IEEE Transactions on Consumer Electronics, 63(2): 178-184 [DOI: 10.1109/TCE.2017.014847]
Pizer S M, Amburn E P, Austin J D, Cromartie R, Geselowitz A, Greer T, Ter Haar Romeny B, Zimmerman J B and Zuiderveld K. 1987. Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3): 355-368 [DOI: 10.1016/S0734-189X(87)80186-X]
Pizer S M, Johnston R E, Ericksen J P, Yankaskas B C and Muller K E. 1990. Contrast-limited adaptive histogram equalization: speed and effectiveness//Proceedings of the 1st Conference on Visualization in Biomedical Computing. Atlanta, USA: IEEE: 337-345 [DOI: 10.1109/VBC.1990.109340]
Qu Y Y, Liu C and Ou Y S. 2020. LEUGAN: low-light image enhancement by unsupervised generative attentional networks [EB/OL]. [2023-10-21]. https://arxiv.org/pdf/2012.13322.pdf
Rahman Z, Jobson D J and Woodell G A. 1996. Multi-scale Retinex for color image enhancement//Proceedings of the 3rd IEEE International Conference on Image Processing. Lausanne, Switzerland: IEEE: 1003-1006 [DOI: 10.1109/ICIP.1996.560995]
Ronneberger O, Fischer P and Brox T. 2015. U-net: convolutional networks for biomedical image segmentation [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1505.04597.pdf
Sung H, Ferlay J, Siegel R L, Laversanne M, Soerjomataram I, Jemal A and Bray F. 2021. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3): 209-249 [DOI: 10.3322/caac.21660]
Wang Q T, Chen W H, Wu X M and Li Z G. 2020. Detail-enhanced multi-scale exposure fusion in YUV color space. IEEE Transactions on Circuits and Systems for Video Technology, 30(8): 2418-2429 [DOI: 10.1109/TCSVT.2019.2919310]
Wang S H, Zheng J, Hu H M and Li B. 2013. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9): 3538-3548 [DOI: 10.1109/TIP.2013.2261309]
Wei C, Wang W J, Yang W H and Liu J Y. 2018. Deep Retinex decomposition for low-light enhancement [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1808.04560.pdf
Woo S, Park J, Lee J Y and Kweon I S. 2018. CBAM: convolutional block attention module [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1807.06521.pdf
Yan X Y, Wang H K, Hou X S and Dun Y J. 2023. Dual-branch low-light image enhancement network via YCbCr space divide-and-conquer. Journal of Image and Graphics, 28(11): 3415-3427 [DOI: 10.11834/jig.221028]
Ying Z Q, Li G and Gao W. 2017. A bio-inspired multi-exposure fusion framework for low-light image enhancement [EB/OL]. [2023-10-12]. https://arxiv.org/pdf/1711.00591.pdf
Yuan X L, Guo L J, Liu W, Zeng X H, Mou Y, Bai S, Pan Z G, Zhang T, Pu W F, Wen C, Wang J, Zhou Z D, Feng J and Hu B. 2022. Artificial intelligence for detecting superficial esophageal squamous cell carcinoma under multiple endoscopic imaging modalities: a multicenter study. Journal of Gastroenterology and Hepatology, 37(1): 169-178 [DOI: 10.1111/jgh.15689]
Zhang Y H, Zhang J W and Guo X J. 2019. Kindling the darkness: a practical low-light image enhancer [EB/OL]. [2023-12-25]. https://arxiv.org/pdf/1905.04161.pdf
Zhao Z J, Lin H X, Shi D M and Zhou G Q. 2024. A non-regularization self-supervised Retinex approach to low-light image enhancement with parameterized illumination estimation. Pattern Recognition, 146: #110025 [DOI: 10.1016/j.patcog.2023.110025]
Zhao Z J, Xiong B S, Wang L, Ou Q F, Yu L and Kuang F. 2022. RetinexDIP: a unified deep framework for low-light image enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1076-1088 [DOI: 10.1109/TCSVT.2021.3073371]
Zhou Z R, Shi Z H and Ren W Q. 2023. Linear contrast enhancement network for low-illumination image enhancement. IEEE Transactions on Instrumentation and Measurement, 72: #5002916 [DOI: 10.1109/TIM.2022.3232641]