Cross-scale transformer image super-resolution reconstruction with fusion channel attention
2024, Pages: 1-14
Online publication date: 2024-09-03
DOI: 10.11834/jig.240279
Li Yan, Dong Shihao, Zhang Jiawei, et al. Cross-scale transformer image super-resolution reconstruction with fusion channel attention[J]. Journal of Image and Graphics,
Objective
With the development of deep learning, Transformer-based network architectures have been introduced into computer vision and have achieved remarkable results. In the super-resolution task, however, Transformer models suffer from a single feature-extraction pattern, loss of high-frequency detail in the reconstructed image, and structural distortion. To address these problems, we propose a cross-scale Transformer image super-resolution reconstruction model with fusion channel attention.
Method
The model consists of four modules: shallow feature extraction, cross-scale deep feature extraction, multilevel feature fusion, and high-quality reconstruction. Shallow feature extraction applies convolution to the early image to obtain more stable outputs. Cross-scale deep feature extraction uses a cross-scale Transformer together with an enhanced channel attention mechanism to enlarge the receptive field and, through weighted filtering, extract features at different scales for fusion. The multilevel feature fusion module uses the enhanced channel attention mechanism to dynamically adjust the channel weights of features at different scales, encouraging the model to learn rich contextual information and strengthening its capability in image super-resolution reconstruction.
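The abstract does not give the internal structure of the enhanced channel attention; as a rough, minimal sketch of channel-wise weighted filtering in the squeeze-and-excitation style (the class name, reduction ratio, and layer choices below are our assumptions, in PyTorch):

    # Minimal sketch of channel-wise weighted filtering (assumed
    # squeeze-and-excitation style; not the paper's exact module).
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global context per channel
            self.fc = nn.Sequential(                 # excitation: per-channel weights
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):                        # x: (B, C, H, W)
            w = self.fc(self.pool(x))                # (B, C, 1, 1) weights in [0, 1]
            return x * w                             # reweight channels, suppress redundancy

Applied after a feature-extraction block, such a module rescales informative channels upward and suppresses redundant ones.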
Result
Evaluation on the Set5, Set14, BSD100, Urban100, and Manga109 benchmark datasets shows that, compared with the SwinIR super-resolution model, the proposed model improves peak signal-to-noise ratio by 0.06 dB to 0.25 dB and produces reconstructions with better visual quality.
Conclusion
The proposed cross-scale Transformer image super-resolution reconstruction model with fusion channel attention fuses convolutional features with Transformer features and uses the enhanced channel attention mechanism to reduce noise and redundant information in the image, lowering the likelihood of blurred or distorted reconstructions. Image super-resolution performance is effectively improved, and test results on several public datasets verify the model's effectiveness.
Objective
Image super-resolution reconstruction converts low-resolution (LR) images into high-resolution (HR) images of the same scene. In recent years, the technique has been widely used in computer vision, image processing, and related fields because of its practical value and theoretical significance. Although convolutional neural network based models have greatly improved reconstruction performance, most super-resolution networks are single-level, end-to-end structures that ignore multi-level feature information during reconstruction, which limits their performance. With the development of deep learning, Transformer-based network architectures have been introduced into computer vision and have achieved significant results, and researchers have brought them into low-level vision tasks. In the image super-resolution reconstruction task, however, the Transformer model suffers from a single feature-extraction pattern, loss of high-frequency detail in the reconstructed image, and structural distortion. To solve these problems, we propose a cross-scale Transformer image super-resolution reconstruction model with fusion channel attention.
Method
The model consists of four modules: shallow feature extraction, cross-scale deep feature extraction, multilevel feature fusion, and a high-quality reconstruction module. Shallow feature extraction uses convolution to process the early image and obtain more stable outputs, since convolutional layers provide stable optimisation and extraction during early visual processing. The cross-scale deep feature extraction module acquires features at different scales with a cross-scale Transformer and an enhanced channel attention mechanism. The core of the cross-scale Transformer is the cross-scale self-attention mechanism and a gated convolutional feed-forward network: the self-attention downsamples the feature maps to different scales by scale factors and learns contextual information by exploiting image self-similarity, while the gated convolutional network, which replaces the feed-forward network of the traditional Transformer, encodes the positions of spatially neighbouring pixels and helps to learn local image structure. An enhanced channel attention mechanism is applied after the cross-scale Transformer to expand the receptive field and extract features at different scales, which replace the original features through weighted filtering before being propagated onward. Because increasing network depth eventually saturates performance, we set the number of residual cross-scale Transformer blocks to 3 to balance model complexity against super-resolution reconstruction performance. After stacking the features of different scales in the multilevel feature fusion module, we use the enhanced channel attention mechanism to dynamically adjust their channel weights and learn rich contextual information, thereby enhancing the network's reconstruction capability.
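The exact cross-scale self-attention design is given in the full paper; the following minimal PyTorch sketch conveys the idea only. Queries are taken from the full-resolution feature map and keys/values from a downsampled copy, so attention exploits cross-scale self-similarity (the class name, single-head formulation, and average-pooling downsampler are our assumptions):

    # Minimal sketch of cross-scale self-attention (an assumed form, not
    # the paper's exact design): queries come from the full-resolution
    # feature map, keys/values from a copy downsampled by `scale`, so
    # each position attends to coarser-scale, self-similar structure.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossScaleAttention(nn.Module):
        def __init__(self, dim, scale=2):
            super().__init__()
            self.scale = scale
            self.q = nn.Conv2d(dim, dim, 1)
            self.kv = nn.Conv2d(dim, dim * 2, 1)
            self.proj = nn.Conv2d(dim, dim, 1)

        def forward(self, x):                        # x: (B, C, H, W)
            B, C, H, W = x.shape
            x_down = F.avg_pool2d(x, self.scale)     # coarser copy (H, W divisible by scale)
            q = self.q(x).flatten(2).transpose(1, 2)           # (B, HW, C)
            k, v = self.kv(x_down).chunk(2, dim=1)
            k = k.flatten(2).transpose(1, 2)                   # (B, hw, C)
            v = v.flatten(2).transpose(1, 2)
            attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
            return self.proj(out)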
In the high-quality reconstruction module, we use convolutional layers and pixel shuffle to up-sample the features to the dimensions of the high-resolution image. In the training phase, we trained the model on 900 HR images from the DIV2K dataset, with the corresponding LR images generated from the HR images by bicubic downsampling (at factors ×2, ×3, and ×4), and we optimised the network with the Adam algorithm using the $L_1$ loss as our loss function.
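As a hedged sketch of the reconstruction head and training objective described above (the channel width, upscale factor, and learning rate are assumed; only pixel shuffle, Adam, and the $L_1$ loss are stated in the text):

    # Sketch of a pixel-shuffle reconstruction head and one L1 training
    # step (assumed layout; only pixel shuffle, Adam and the L1 loss are
    # stated in the text).
    import torch
    import torch.nn as nn

    upscale, dim = 4, 64                              # assumed values
    recon = nn.Sequential(
        nn.Conv2d(dim, 3 * upscale ** 2, 3, padding=1),  # expand channels
        nn.PixelShuffle(upscale),                        # rearrange to the HR grid
    )
    optimizer = torch.optim.Adam(recon.parameters(), lr=2e-4)  # assumed lr
    l1 = nn.L1Loss()

    feats = torch.randn(1, dim, 48, 48)               # deep features (dummy)
    hr = torch.randn(1, 3, 192, 192)                  # ground-truth HR patch (dummy)
    loss = l1(recon(feats), hr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()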
Result
We performed tests on five standard datasets, Set5, Set14, BSD100, Urban100, and Manga109, and compared the proposed model with ten state-of-the-art models: enhanced deep residual networks for single image super-resolution (EDSR), residual channel attention networks (RCAN), second-order attention network (SAN), cross-scale non-local attention (CSNLA), the cross-scale internal graph neural network (IGNN), holistic attention network (HAN), non-local sparse attention (NLSA), image restoration using Swin Transformer (SwinIR), efficient long-range attention network (ELAN), and permuted self-attention (SRFormer). We use peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as performance metrics; because humans are very sensitive to image brightness, both metrics are measured on the Y channel of the image. Experimental results show that the proposed model obtains higher PSNR and SSIM values and recovers more detailed information and more accurate textures at magnification factors ×2, ×3, and ×4. The proposed method improves on SwinIR by 0.13-0.25 dB and on ELAN by 0.07-0.21 dB on the Urban100 dataset, and on SwinIR by 0.07-0.21 dB and on ELAN by 0.06-0.19 dB on the Manga109 dataset. We use local attribution maps (LAM) to explore model behaviour further; the results show that the proposed model exploits a wider range of pixel information and exhibits a higher diffusion index (DI) than SwinIR, which demonstrates the effectiveness of the proposed model from an interpretability point of view.
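For reference, Y-channel PSNR is conventionally computed with ITU-R BT.601 luminance coefficients, as in this sketch (the helper names are ours):

    # Sketch: PSNR on the Y (luminance) channel, using ITU-R BT.601
    # coefficients as is conventional in SR evaluation.
    import numpy as np

    def rgb_to_y(img):                    # img: float array in [0, 255], H x W x 3
        return 16.0 + (65.481 * img[..., 0]
                       + 128.553 * img[..., 1]
                       + 24.966 * img[..., 2]) / 255.0

    def psnr_y(sr, hr):
        y_sr = rgb_to_y(sr.astype(np.float64))
        y_hr = rgb_to_y(hr.astype(np.float64))
        mse = np.mean((y_sr - y_hr) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)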
Conclusion
The proposed cross-scale Transformer image super-resolution reconstruction model with multilevel fusion channel attention reduces noise and redundant information in the image by fusing convolutional features with Transformer features and applying a strengthened channel attention mechanism, which lowers the likelihood of the model producing blurred or distorted images. Image super-resolution performance is effectively improved, and test results on a number of public experimental datasets verify the effectiveness of the proposed model. Visually, the model obtains reconstructed images that are sharper, closer to the real image, and contain fewer artefacts.
image super-resolution; cross-scale Transformer; channel attention mechanism; feature fusion; deep learning
Agustsson E and Timofte R. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study//2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Honolulu, HI, USA: IEEE: 1122-1131 [DOI: 10.1109/CVPRW.2017.150]
Al-hayani M and Janabi S. 2023. Medical Magnetic Resonance Imagery Super-Resolution via Residual Dense Network [EB/OL]. [2023-11-14]. https://www.researchgate.net/publication/369066884
Bevilacqua M, Roumy A, Guillemot C and Morel M A. 2012. Low-Complexity Single-Image Super-Resolution Based on Nonnegative Neighbor Embedding//Proceedings of the British Machine Vision Conference 2012. Surrey: British Machine Vision Association: 135.1-135.10 [DOI: 10.5244/C.26.135]
Cao H, Wang Y, Chen J, Jiang D, Zhang X, Tian Q and Wang M. 2021. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [EB/OL]. [2023-11-15]. http://arxiv.org/abs/2105.05537
Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A and Zagoruyko S. 2020. End-to-End Object Detection with Transformers//Computer Vision – ECCV 2020. Cham: Springer International Publishing: 213-229 [DOI: 10.1007/978-3-030-58452-8_13]
Chan K C K, Zhou S, Xu X and Loy C C. 2022. BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE: 5962-5971 [DOI: 10.1109/CVPR52688.2022.00588]
Chu X, Tian Z, Wang Y, Zhang B, Ren H, Wei X, Xia H and Shen C. 2021. Twins: Revisiting the Design of Spatial Attention in Vision Transformers [EB/OL]. [2023-11-14]. http://arxiv.org/abs/2104.13840
Dai T, Cai J, Zhang Y, Xia S T and Zhang L. 2019. Second-Order Attention Network for Single Image Super-Resolution//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE: 11057-11066 [DOI: 10.1109/CVPR.2019.01132]
Dong C, Loy C C, He K and Tang X. 2014. Learning a Deep Convolutional Network for Image Super-Resolution//Computer Vision – ECCV 2014. Cham: Springer International Publishing: 184-199 [DOI: 10.1007/978-3-319-10593-2_13]
Dong X Y, Sun X, Jia Z H, Gao L R and Zhang B. Remote Sensing Image Super-Resolution Using Novel Dense-Sampling Networks [EB/OL]. [2023-11-15]. https://ieeexplore.ieee.org/document/9107103
Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X H, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J and Houlsby N. 2021. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale [EB/OL]. [2023-11-15]. https://doi.org/10.48550/arXiv.2010.11929
Gao X, Lu W, Tao D and Li X. 2009. Image quality assessment based on multiscale geometric analysis. IEEE Transactions on Image Processing, 18(7): 1409-1423 [DOI: 10.1109/TIP.2009.2018014]
Gu J and Dong C. 2021. Interpreting Super-Resolution Networks with Local Attribution Maps//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE: 9195-9204 [DOI: 10.1109/CVPR46437.2021.00908]
Huang J B, Singh A and Ahuja N. 2015. Single image super-resolution from transformed self-exemplars//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA, USA: IEEE: 5197-5206 [DOI: 10.1109/CVPR.2015.7299156]
Huang M, Liu Y, Peng Z, Liu C, Lin D, Zhu S, Yuan N, Ding K and Jin L. 2022. SwinTextSpotter: Scene Text Spotting via Better Synergy between Text Detection and Text Recognition//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE: 4583-4593 [DOI: 10.1109/CVPR52688.2022.00455]
Isobe T, Jia X, Tao X, Li C, Li R, Shi Y, Mu J, Lu H and Tai Y W. 2022. Look Back and Forth: Video Super-Resolution with Explicit Temporal Difference Modeling//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE: 17390-17399 [DOI: 10.1109/CVPR52688.2022.01689]
Jo Y, Oh S W, Kang J and Kim S J. 2018. Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE: 3224-3232 [DOI: 10.1109/CVPR.2018.00340]
Kim J, Lee J K and Lee K M. 2016. Accurate Image Super-Resolution Using Very Deep Convolutional Networks//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE: 1646-1654 [DOI: 10.1109/CVPR.2016.182]
Li A, Zhang L, Liu Y and Zhu C. 2023. Feature Modulation Transformer: Cross-Refinement of Global Representation via High-Frequency Prior for Image Super-Resolution//2023 IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE: 12480-12490 [DOI: 10.1109/ICCV51070.2023.01150]
Liang J, Cao J, Sun G, Zhang K, Van Gool L and Timofte R. 2021. SwinIR: Image Restoration Using Swin Transformer//2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Montreal, BC, Canada: IEEE: 1833-1844 [DOI: 10.1109/ICCVW54120.2021.00210]
Lim B, Son S, Kim H, Nah S and Lee K M. 2017. Enhanced Deep Residual Networks for Single Image Super-Resolution//2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Honolulu, HI, USA: IEEE: 1132-1140 [DOI: 10.1109/CVPRW.2017.151]
Liu L, Ouyang W, Wang X, Fieguth P, Chen J, Liu X and Pietikäinen M. 2019. Deep Learning for Generic Object Detection: A Survey [EB/OL]. [2023-11-14]. http://arxiv.org/abs/1809.02165
Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S and Guo B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada: IEEE: 9992-10002 [DOI: 10.1109/ICCV48922.2021.00986]
Loshchilov I and Hutter F. 2019. Decoupled Weight Decay Regularization [EB/OL]. [2023-11-17]. http://arxiv.org/abs/1711.05101
Martin D, Fowlkes C, Tal D and Malik J. 2001. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics//Proceedings of the Eighth IEEE International Conference on Computer Vision. Vancouver, BC, Canada: IEEE Computer Society: 416-423 [DOI: 10.1109/ICCV.2001.937655]
Matsui Y, Ito K, Aramaki Y, Yamasaki T and Aizawa K. 2017. Sketch-based manga retrieval using Manga109 dataset. Multimedia Tools and Applications, 76(20): 21811-21838 [DOI: 10.1007/s11042-016-4020-z]
Mei Y, Fan Y and Zhou Y. 2021. Image Super-Resolution with Non-Local Sparse Attention//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE: 3516-3525 [DOI: 10.1109/CVPR46437.2021.00352]
Mei Y, Fan Y, Zhou Y, Huang L, Huang T S and Shi H. 2020. Image Super-Resolution With Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE: 5689-5698 [DOI: 10.1109/CVPR42600.2020.00573]
Niu B, Wen W, Ren W, Zhang X, Yang L, Wang S, Zhang K, Cao X and Shen H. 2020. Single Image Super-Resolution via a Holistic Attention Network//Computer Vision – ECCV 2020. Cham: Springer International Publishing: 191-207 [DOI: 10.1007/978-3-030-58610-2_12]
Wang W, Xie E, Li X, Fan D P, Song K, Liang D, Lu T, Luo P and Shao L. 2021. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada: IEEE: 548-558 [DOI: 10.1109/ICCV48922.2021.00061]
Wang X, Chan K C K, Yu K, Dong C and Loy C C. 2019. EDVR: Video Restoration With Enhanced Deformable Convolutional Networks//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach, CA, USA: IEEE: 1954-1963 [DOI: 10.1109/CVPRW.2019.00247]
Wang Z, Bovik A C, Sheikh H R and Simoncelli E P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612 [DOI: 10.1109/TIP.2003.819861]
Woo S, Park J, Lee J Y and Kweon I S. 2018. CBAM: Convolutional Block Attention Module//Computer Vision – ECCV 2018. Cham: Springer International Publishing: 3-19 [DOI: 10.1007/978-3-030-01234-2_1]
Wu B, Xu C, Dai X, Wan A, Zhang P, Yan Z, Tomizuka M, Gonzalez J, Keutzer K and Vajda P. 2020. Visual Transformers: Token-based Image Representation and Processing for Computer Vision [EB/OL]. [2023-11-14]. https://arxiv.org/abs/2006.03677v4
Xiong W, Xiong C Y, Gao Z R, Chen W Q, Zheng R H and Tian J W. 2023. Image super-resolution with channel-attention-embedded Transformer. Journal of Image and Graphics, 28(12): 3744-3757 [DOI: 10.11834/jig.221033]
Zeyde R, Elad M and Protter M. 2012. On Single Image Scale-Up Using Sparse-Representations//Curves and Surfaces. Berlin, Heidelberg: Springer: 711-730 [DOI: 10.1007/978-3-642-27413-8_47]
Zhang H, Zu K, Lu J, Zou Y and Meng D. 2021. EPSANet: An Efficient Pyramid Squeeze Attention Block on Convolutional Neural Network [EB/OL]. [2023-11-14]. http://arxiv.org/abs/2105.14447
Zhang N, Wang Y, Zhang X, Xu D, Wang X, Ben G, Zhao Z and Li Z. 2022. A multi-degradation aided method for unsupervised remote sensing image super resolution with convolution neural networks. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-14 [DOI: 10.1109/TGRS.2020.3042460]
Zhang X, Zeng H, Guo S and Zhang L. 2022. Efficient Long-Range Attention Network for Image Super-resolution [EB/OL]. [2023-11-14]. http://arxiv.org/abs/2203.06697
Zhang Y, Li K, Li K, Wang L, Zhong B and Fu Y. 2018. Image Super-Resolution Using Very Deep Residual Channel Attention Networks//Computer Vision – ECCV 2018. Cham: Springer International Publishing: 294-310 [DOI: 10.1007/978-3-030-01234-2_18]
Zhou S, Zhang J, Zuo W and Loy C C. 2020. Cross-Scale Internal Graph Neural Network for Image Super-Resolution//Advances in Neural Information Processing Systems. Curran Associates, Inc.: 3499-3509
Zhou Y, Li Z, Guo C L, Bai S, Cheng M M and Hou Q. 2023. SRFormer: Permuted Self-Attention for Single Image Super-Resolution//2023 IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE: 12734-12745 [DOI: 10.1109/ICCV51070.2023.01174]