恶劣场景下视觉感知与理解综述
Visual perception and understanding in degraded scenarios
2024, Vol. 29, No. 6: 1667-1684
Print publication date: 2024-06-16
DOI: 10.11834/jig.240041
汪文靖, 杨文瀚, 方玉明, 黄华, 刘家瑛. 2024. 恶劣场景下视觉感知与理解综述. 中国图象图形学报, 29(06):1667-1684
Wang Wenjing, Yang Wenhan, Fang Yuming, Huang Hua, Liu Jiaying. 2024. Visual perception and understanding in degraded scenarios. Journal of Image and Graphics, 29(06):1667-1684
Images and videos captured in degraded scenarios suffer from complex visual degradation, which on the one hand diminishes visual presentation and perceptual experience, and on the other hand greatly hinders visual analysis and understanding. This paper systematically reviews recent domestic and international research progress in visual perception and understanding under degraded scenarios, covering image and video degradation modeling, visual enhancement for degraded scenes, and visual analysis and understanding in degraded scenes. The section on visual data and degradation modeling discusses methods for modeling images, videos, and degradation processes under different conditions, covering noise modeling, downsampling modeling, illumination modeling, and rain and fog modeling. The section on traditional visual enhancement reviews early, non-deep-learning enhancement algorithms, including histogram equalization, Retinex theory, and filtering methods. The section on deep-learning-based visual enhancement is organized from the perspective of innovations in model architecture, discussing convolutional neural networks, Transformer models, and diffusion models. Unlike traditional visual enhancement, whose goal is to comprehensively improve human visual perception of images and videos, the new generation of enhancement and analysis methods considers the understanding performance of machine vision on degraded images and videos. The section on visual understanding in degraded scenarios discusses the corresponding datasets, deep-learning-based understanding methods, and the collaborative computation of enhancement and understanding in degraded scenes. The paper reviews the challenges of the above research in detail and traces the development and frontier trends of these techniques at home and abroad. Finally, based on this analysis, future directions for visual perception and understanding in degraded scenarios are discussed.
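For orientation, the degradation models named above are usually written in the following standard forms (the notation here is ours, and these are the textbook formulations common to this literature rather than any single surveyed method):

Illumination (Retinex): I = R ∘ L, where the observed image I is the element-wise product of reflectance R and illumination L.
Rain: O = B + Σᵢ Rᵢ, where an observed frame O is a background layer B plus one or more rain layers Rᵢ.
Fog and haze (atmospheric scattering): I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)), scene radiance J, global atmospheric light A, scattering coefficient β, and scene depth d.
Poissonian-Gaussian noise: y = x + n, with Var[n | x] = a·x + b, a signal-dependent shot-noise term plus a signal-independent read-noise term.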
Visual media such as images and videos are crucial means for humans to acquire, express, and convey information. The widespread application of foundational technologies such as artificial intelligence and big data has gradually integrated systems for perceiving and understanding images and videos into all aspects of production and daily life. However, massive applications also bring challenges. In open environments, diverse applications generate vast amounts of heterogeneous data, and the resulting images and videos suffer from complex visual degradation. For instance, adverse weather such as heavy fog reduces visibility and causes a loss of detail; raindrops and streaks in rainy or snowy weather occlude and deform objects and people, introducing structured noise; and low-light conditions cause severe loss of detail and structural information. Visual degradation not only diminishes the visual presentation and perceptual experience of images and videos but also significantly reduces the usability and effectiveness of existing visual analysis and understanding systems. In today's era of intelligence and informatization, with visual media data growing explosively, visual perception and understanding technologies for challenging scenarios therefore hold substantial scientific and practical value.

Traditional visual enhancement techniques fall into two categories: spatial-domain and frequency-domain methods (both are illustrated in the sketch below). Spatial-domain methods directly process 2D spatial data and include grayscale transformation, histogram transformation, and spatial filtering. Frequency-domain methods transform the data into the frequency domain, for example via the Fourier transform, process it there, and then map the result back to the spatial domain. Advances in computer vision have produced more carefully designed and robust enhancement algorithms, such as dehazing based on the dark channel prior. Since the 2010s, rapid progress in artificial intelligence has enabled many enhancement methods based on deep learning models, which not only reconstruct damaged visual information but also further improve visual presentation, comprehensively enhancing the perceptual experience of images and videos captured in challenging scenarios.

As computer vision technology becomes more widespread, intelligent visual analysis and understanding are penetrating many aspects of society, such as face recognition and autonomous driving. However, visual enhancement in traditional digital image processing frameworks mainly aims to improve visual effects and ignores the impact on high-level analysis tasks, which severely reduces the usability and effectiveness of existing visual understanding systems. In recent years, several visual understanding datasets for challenging scenarios have been established, driving the development of numerous visual analysis and understanding algorithms for these scenarios. Domain transfer methods from ordinary to challenging scenes are attracting attention as a way to further reduce reliance on annotated data. Coordinating and optimizing the relationship between visual perception and visual presentation, two different task objectives, is also an important research problem in visual computing.
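To make the spatial-domain versus frequency-domain distinction concrete, the following minimal numpy sketch (our illustration, not code from any surveyed work) shows one representative of each family: histogram equalization remaps pixel intensities directly in the spatial domain, while the second routine attenuates high frequencies in the Fourier domain and transforms back.

import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Spatial-domain enhancement: remap intensities so the histogram flattens."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size                      # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)          # intensity look-up table
    return lut[img]

def gaussian_lowpass(img: np.ndarray, sigma: float = 20.0) -> np.ndarray:
    """Frequency-domain enhancement: transform, attenuate high frequencies, invert."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist2 = (y - h / 2) ** 2 + (x - w / 2) ** 2         # squared distance to center
    f *= np.exp(-dist2 / (2 * sigma ** 2))              # Gaussian low-pass mask
    out = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.rand(64, 64) * 128).astype(np.uint8)   # toy low-contrast input
print(histogram_equalization(img).std(), gaussian_lowpass(img).std())

Real systems would of course apply these routines to natural images; the random toy input merely keeps the script self-contained.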
To address the development needs of visual computing in challenging scenarios, this study extensively reviews the challenges of the aforementioned research, outlines developmental trends, and explores cutting-edge dynamics. Specifically, it reviews technologies for visual degradation modeling, visual enhancement, and visual analysis and understanding in challenging scenarios.

In the section on visual data and degradation modeling, methods for modeling image and video degradation processes in different scenarios are discussed, including noise modeling, downsampling modeling, illumination modeling, and rain and fog modeling. For noise, the Poissonian-Gaussian model is the most commonly used. For downsampling, classical formulations combine bicubic interpolation with blur kernels; noise, including JPEG compression artifacts, is also considered, and recent comprehensive models jointly apply blurring, downsampling, and noise (an illustrative simulator of these components is sketched after this passage). For illumination, the Retinex theory, which decomposes an image into illumination and reflectance, is among the most widely used. For rain and fog, images are generally decomposed into rain and background layers.

In the section on traditional visual enhancement, numerous algorithms developed to counter the degradation of image and video information in adverse scenarios are reviewed. Early algorithms often employed simple strategies, such as interpolation-based super-resolution; constrained by linear models, these methods struggle to restore high-frequency details. Researchers have since proposed more sophisticated algorithms for the complex degradations in adverse scenarios, including histogram equalization, Retinex-based methods, and filtering methods.

Deep neural networks have shown remarkable performance in fields such as image classification, object detection, and face recognition, and they also excel in low-level vision tasks such as super-resolution, style transfer, color conversion, and texture transfer. With the continuous evolution of deep network frameworks, researchers have proposed diverse visual enhancement methods. The section on deep-learning-based visual enhancement organizes these methods from the perspective of innovations in model architecture, discussing convolutional neural networks, Transformer models, and diffusion models. Unlike traditional visual enhancement, which aims to comprehensively improve human visual perception of images and videos, the new generation of enhancement and analysis methods considers the understanding performance of machine vision in degraded scenarios. The section on visual understanding in challenging scenarios discusses the corresponding datasets, deep-learning-based understanding methods, and the collaborative computation of visual enhancement and understanding. Finally, based on this analysis, the study offers prospects for the future development of visual perception and understanding in adverse scenarios.

When facing complex degradation scenarios, real-world images may be influenced simultaneously by factors such as heavy rain and fog, dynamic lighting changes, low-light environments, and image corruption, which requires models to handle unknown and diverse degradations.
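As a concrete illustration of the degradation components enumerated above, the sketch below chains Poissonian-Gaussian noise with blurring and downsampling, in the spirit of the comprehensive degradation models mentioned in the text. It is a minimal simulator under our own assumptions (Gaussian blur kernel, plain decimation instead of bicubic resampling, no JPEG stage), not a reference implementation from the survey.

import numpy as np

def poissonian_gaussian_noise(x: np.ndarray, a: float = 0.01, b: float = 1e-4) -> np.ndarray:
    """Signal-dependent shot noise plus signal-independent read noise, so that
    Var[y | x] = a * x + b for a clean signal x scaled to [0, 1]."""
    shot = a * np.random.poisson(np.clip(x, 0, 1) / a)  # Poissonian component
    read = np.sqrt(b) * np.random.randn(*x.shape)       # Gaussian component
    return shot + read

def blur_downsample(x: np.ndarray, scale: int = 2, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Classical downsampling model: blur with a Gaussian kernel, then decimate."""
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    k /= k.sum()                                        # normalized blur kernel
    h, w = x.shape
    pad = ksize // 2
    xp = np.pad(x, pad, mode="reflect")
    blurred = np.zeros_like(x)
    for i in range(ksize):                              # direct 2D convolution
        for j in range(ksize):
            blurred += k[i, j] * xp[i:i + h, j:j + w]
    return blurred[::scale, ::scale]                    # decimation by `scale`

clean = np.random.rand(32, 32)                          # stand-in for a clean patch
degraded = poissonian_gaussian_noise(blur_downsample(clean))
print(degraded.shape)                                   # (16, 16)

Training pairs for blind restoration are typically synthesized in exactly this way: the clean image serves as the target and its degraded counterpart as the network input.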
A current challenge is that most existing models are designed for specific degradation scenarios: they build in a large amount of scenario-specific prior knowledge and therefore adapt poorly to other degradations. Moreover, the construction of existing visual understanding models for adverse scenarios relies on downstream task information, including target-domain data distributions, degradation priors, and pre-trained downstream models, which makes robustness across arbitrary tasks and analysis models difficult to achieve. Most methods are also limited to a specific machine analysis task and cannot generalize to new downstream scenarios. Finally, large models have in recent years achieved remarkable success across many fields, and many studies have demonstrated their unprecedented potential in enhancement, reconstruction, and other low-level vision tasks. However, their high complexity also presents challenges, including substantial computational resource requirements, long training times, and difficult model optimization. At the same time, the generalization capability of models in adverse scenarios remains a pressing challenge that calls for more comprehensive data construction strategies and more effective model optimization methods. How to improve the performance and reliability of large visual models on visual perception and understanding tasks in adverse scenarios is a key problem that remains unsolved.
Keywords: adverse scenarios; visual perception; visual understanding; image and video enhancement; image and video processing; deep learning