多尺度信息交互与融合的乳腺病理图像分类
Classification of breast pathological images based on multiscale information interaction and fusion
2024年第29卷第4期, 页码: 1085-1099
纸质出版日期: 2024-04-16
DOI: 10.11834/jig.221178
丁维龙, 朱峰龙, 郑魁, 贾秀鹏. 2024. 多尺度信息交互与融合的乳腺病理图像分类. 中国图象图形学报, 29(04):1085-1099
Ding Weilong, Zhu Fenglong, Zheng Kui, Jia Xiupeng. 2024. Classification of breast pathological images based on multiscale information interaction and fusion. Journal of Image and Graphics, 29(04):1085-1099
目的
基于深度学习方法进行乳腺癌识别是一项具有挑战的任务,目前较多研究使用单一倍率下的乳腺组织病理图像作为模型的输入,忽略了乳腺组织病理图像固有的多倍率特点,而少数将不同倍率下的图像作为模型输入的研究,存在特征利用率较低以及不同倍率的图像之间缺乏信息交互等问题。
方法
针对上述问题,提出一种基于多尺度和分组注意力机制的卷积神经网络改进策略。该策略主要包括信息交互模块和特征融合模块。前者通过空间注意力加强不同倍率的图像之间的相关性,然后将加权累加的结果反馈给原始分支进行动态选择实现特征流通;后者则利用一种分组注意力来提升特征的利用率,同时基于特征金字塔来消除图像之间的感受野差异。
结果
本文将上述策略应用到多种卷积网络中,并与最新的方法进行比较。在Camelyon16公开数据集上进行五折交叉验证实验,并对每一项评价指标计算均值和标准差。相比于单一尺度图像作为输入的卷积网络,本文改进的方法在准确率上提升0.9%~1.1%,F1分数提升1.1%~1.2%;相较于对比方法中性能最好的TransPath网络,本文改进的DenseNet201(dense convolutional network)在准确率上提升0.6%,精确率提升0.8%,F1分数提升0.6%,并且各项指标的标准差低于TransPath,表明加入策略的网络具有更好的稳定性。
结论
本文所提出的策略能弥补一般多尺度网络的缺陷,并具备一定的通用性,可获得更好的乳腺癌分类性能。
Objective
Breast cancer recognition based on deep learning methods is a challenging task because breast histopathology images are extremely large (a single image is approximately 1 GB); under current computational constraints, they must therefore be tiled into patches before classification. Most current research on breast cancer recognition focuses on single-scale networks, ignoring the multi-magnification, pyramidal storage structure of breast histopathology images. The few studies on multiscale networks merely feed images of different magnifications into the model and concatenate or aggregate their features after several convolutional layers. Such fusion is simplistic: it ignores the correlation between images of different scales and provides no cross-scale guidance when texture features are extracted in the shallow layers of the model. As a result, problems such as low feature utilization and a lack of information interaction between images of different magnifications remain.
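As a hedged illustration of this tiling step, the sketch below reads spatially aligned patches at two pyramid levels with the openslide-python package; the level indices, patch size, and helper name are illustrative assumptions rather than the preprocessing actually used in this paper.

# Minimal sketch: paired high-/low-magnification patches from a WSI pyramid.
# Assumes an OpenSlide-readable slide; level indices and patch size are illustrative.
import openslide

def paired_patches(slide_path, x, y, size=224, high_level=0, low_level=2):
    """Return (high-magnification, low-magnification) patches centred on the
    same tissue region; (x, y) is the top-left corner in level-0 coordinates."""
    slide = openslide.OpenSlide(slide_path)
    # High-magnification patch read directly at the base level.
    high = slide.read_region((x, y), high_level, (size, size)).convert("RGB")
    # The coarser patch covers size * downsample level-0 pixels, so shift the
    # origin back by half of the extra context to keep both patches centred.
    scale = int(slide.level_downsamples[low_level])
    offset = (size * scale - size) // 2
    low = slide.read_region((x - offset, y - offset), low_level, (size, size)).convert("RGB")
    return high, low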
Method
This paper proposes a convolutional neural network improvement strategy based on multiscale and group attention mechanisms to address the above problems. The strategy comprises two modules: an information interaction module and a feature fusion module. The first module uses a spatial attention mechanism to extract clear cell morphology from high-magnification images and global context from low-magnification images; feature information that is highly relevant to the classification target of the main branch receives additional weight. The weighted features are then accumulated and fed back to the original branch for dynamic selection, achieving feature interaction and circulation. The second module addresses the fact that the number of feature-map channels multiplies as network depth increases, and that general channel attention suffers from heavy computation and a low feature activation rate. This paper therefore proposes a group attention based on group convolution and builds it into the feature fusion module. In addition, images at different magnifications have different receptive fields (i.e., each pixel corresponds to a different physical length), so a feature pyramid is used to eliminate this receptive-field difference during feature fusion.
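To make this description concrete, the following PyTorch sketch gives one possible reading of the two modules; the class structure, group count, reduction ratio, and gating scheme are assumptions for illustration, not the paper's exact implementation. In this reading, the bilinear resize stands in for the feature-pyramid step that removes the receptive-field mismatch before fusion.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: pool over channels, then a 7 x 7 convolution.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class InteractionModule(nn.Module):
    # The auxiliary-magnification branch is re-weighted by spatial attention and
    # fed back to the main branch through a learned gate (dynamic selection).
    def __init__(self, channels):
        super().__init__()
        self.attn = SpatialAttention()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, main, aux):
        # Align spatial sizes (feature-pyramid step) before interaction.
        aux = F.interpolate(aux, size=main.shape[-2:], mode="bilinear",
                            align_corners=False)
        weighted = self.attn(aux) * aux           # emphasise regions relevant to the target
        return main + self.gate(main) * weighted  # gated feedback to the main branch

class GroupAttention(nn.Module):
    # Squeeze-excitation-style channel attention whose 1 x 1 convolutions are
    # group convolutions; channels must be divisible by groups * reduction here.
    def __init__(self, channels, groups=8, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, groups=groups),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)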
Result
In this paper, the above strategy is applied to a variety of convolutional neural networks and compared with the latest methods. A fivefold cross-validation experiment is conducted on the Camelyon16 public dataset, and the mean and standard deviation are calculated for each evaluation metric. Compared with single-scale convolutional networks, the improved method in this paper achieves a 0.9% to 1.1% gain in accuracy and a 1.1% to 1.2% gain in F1-score. Compared with TransPath, the best-performing network among the comparison methods, the improved DenseNet201 in this paper achieves a 0.6% gain in accuracy, a 0.8% gain in precision, and a 0.6% gain in F1-score, and the standard deviations of its metrics are lower than those of TransPath, indicating that the network incorporating the strategy is more stable.
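The evaluation protocol can be summarised by the sketch below, which aggregates per-fold metrics into means and standard deviations; train_and_predict is a hypothetical stand-in for training the improved network on one split and predicting on the held-out fold.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, f1_score
from sklearn.model_selection import StratifiedKFold

def cross_validate(patches, labels, train_and_predict, seed=0):
    # Five stratified folds, as in the Camelyon16 experiments described above.
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = {"accuracy": [], "precision": [], "f1": []}
    for train_idx, test_idx in skf.split(patches, labels):
        y_true = labels[test_idx]
        y_pred = train_and_predict(patches[train_idx], labels[train_idx],
                                   patches[test_idx])
        scores["accuracy"].append(accuracy_score(y_true, y_pred))
        scores["precision"].append(precision_score(y_true, y_pred))
        scores["f1"].append(f1_score(y_true, y_pred))
    # Report each metric as mean ± standard deviation over the five folds.
    return {name: (np.mean(vals), np.std(vals)) for name, vals in scores.items()}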
Conclusion
Overall, the proposed strategy can compensate for the shortcomings of general multiscale networks and is general enough to be applied to various backbone networks, yielding superior performance in breast cancer image classification. The strategy is therefore useful for future multiscale research and for feature extraction in downstream tasks.
乳腺病理图像分类；密集卷积网络；多尺度；注意力；特征融合
classification of breast pathological images; dense convolutional network; multiscale; attention; fusion of features
Alom M Z, Yakopcic C, Nasrin M S, Taha T M and Asari V K. 2019. Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network. Journal of Digital Imaging, 32(4): 605-617 [DOI: 10.1007/s10278-019-00182-7]
Bejnordi B E, Veta M, van Diest P J, van Ginneken B, Karssemeijer N, van der Laak J A W M and the CAMELYON16 Consortium. 2017. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA, 318(22): 2199-2210 [DOI: 10.1001/jama.2017.14585]
Bian X W and Ping Y F. 2019. Pathology in China: challenges and opportunities. Journal of Third Military Medical University, 41(19): 1815-1817
卞修武, 平轶芳. 2019. 我国病理学科发展面临的挑战和机遇. 第三军医大学学报, 41(19): 1815-1817 [DOI: 10.16016/j.1000-5404.201909212]
Campanella G, Hanna M G, Geneslaw L, Miraflor A, Silva V W K, Busam K J, Brogi E, Reuter V E, Klimstra D S and Fuchs T J. 2019. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature Medicine, 25(8): 1301-1309 [DOI: 10.1038/s41591-019-0508-1]
Chen H Y, Li C, Wang G, Li X Y, Rahaman M M, Sun H Z, Hu W M, Li Y X, Liu W L, Sun C H, Ai S L and Grzegorzek M. 2022. GasHis-Transformer: a multi-scale visual Transformer approach for gastric histopathological image detection. Pattern Recognition, 130: #108827 [DOI: 10.1016/j.patcog.2022.108827]
Chhipa P C, Upadhyay R, Pihlgren G G, Saini R, Uchida S and Liwicki M. 2023. Magnification prior: a self-supervised method for learning representations on breast cancer histopathological images//Proceedings of 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE: 2716-2726 [DOI: 10.1109/WACV56688.2023.00274]
Ciga O, Xu T, Nofech-Mozes S, Noy S, Lu F I and Martel A L. 2021. Overcoming the limitations of patch-based learning to detect cancer in whole slide images. Scientific Reports, 11(1): #8894 [DOI: 10.1038/s41598-021-88494-z]
Deng J, Dong W, Socher R, Li L J, Li K and Li F F. 2009. ImageNet: a large-scale hierarchical image database//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, USA: IEEE: 248-255 [DOI: 10.1109/CVPR.2009.5206848]
Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X H, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J and Houlsby N. 2020. An image is worth 16 × 16 words: Transformers for image recognition at scale [EB/OL]. [2020-10-22]. https://arxiv.org/pdf/2010.11929.pdf
Ed-daoudy A and Maalmi K. 2020. Breast cancer classification with reduced feature set using association rules and support vector machine. Network Modeling Analysis in Health Informatics and Bioinformatics, 9(1): #34 [DOI: 10.1007/s13721-020-00237-8]
Feng S L, Zhao H M, Shi F, Cheng X N, Wang M, Ma Y H, Xiang D H, Zhu W F and Chen X J. 2020. CPFNet: context pyramid fusion network for medical image segmentation. IEEE Transactions on Medical Imaging, 39(10): 3008-3018 [DOI: 10.1109/TMI.2020.2983721]
Gao Z Y, Hong B Y, Li Y, Zhang X L, Wu J L, Wang C B, Zhang X R, Gong T L, Zheng Y F, Meng D Y and Li C. 2023. A semi-supervised multi-task learning framework for cancer classification with weak annotation in whole-slide images. Medical Image Analysis, 83: #102652 [DOI: 10.1016/j.media.2022.102652]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 770-778 [DOI: 10.1109/CVPR.2016.90]
Hossin M and Sulaiman M N. 2015. A review on evaluation metrics for data classification evaluations. International Journal of Data Mining and Knowledge Management Process, 5(2): 1-11 [DOI: 10.5121/ijdkp.2015.5201]
Huang G, Liu Z, Van Der Maaten L and Weinberger K Q. 2017. Densely connected convolutional networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 2261-2269 [DOI: 10.1109/CVPR.2017.243]
Jin X, Wen K, Lyu G F, Shi J, Chi M X, Wu Z and An H. 2020. Survey on the applications of deep learning to histopathology. Journal of Image and Graphics, 25(10): 1982-1993
金旭, 文可, 吕国锋, 石军, 迟孟贤, 武铮, 安虹. 2020. 深度学习在组织病理学中的应用综述. 中国图象图形学报, 25(10): 1982-1993 [DOI: 10.11834/jig.200460]
Kang D U and Chun S Y. 2022. Multi-scale curriculum learning for efficient automatic whole slide image segmentation//Proceedings of 2022 IEEE International Conference on Big Data and Smart Computing (BigComp). Daegu, Korea (South): IEEE: 366-367 [DOI: 10.1109/BigComp54360.2022.00081]
Litjens G, Kooi T, Bejnordi B E, Setio A A A, Ciompi F, Ghafoorian M, van der Laak J A W M, van Ginneken B and Sánchez C I. 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42: 60-88 [DOI: 10.1016/j.media.2017.07.005]
Liu Z, Mao H Z, Wu C Y, Feichtenhofer C, Darrell T and Xie S N. 2022. A ConvNet for the 2020s//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE: 11966-11976 [DOI: 10.1109/CVPR52688.2022.01167]
Loshchilov I and Hutter F. 2016. SGDR: stochastic gradient descent with warm restarts [EB/OL]. [2022-12-01]. https://arxiv.org/pdf/1608.03983.pdf
Lu M Y, Williamson D F K, Chen T Y, Chen R J, Barbieri M and Mahmood F. 2021. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering, 5(6): 555-570 [DOI: 10.1038/s41551-020-00682-w]
Man R, Yang P, Ji C Y and Xu B W. 2020. Survey of classification methods of breast cancer histopathological images. Computer Science, 47(11A): 145-150
满芮, 杨萍, 季程雨, 许博文. 2020. 乳腺癌组织病理学图像分类方法研究综述. 计算机科学, 47(11A): 145-150 [DOI: 10.11896/jsjkx.191100098]
Otsu N. 1979. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1): 62-66 [DOI: 10.1109/TSMC.1979.4310076]
Reinhard E, Adhikhmin M, Gooch B and Shirley P. 2001. Color transfer between images. IEEE Computer Graphics and Applications, 21(5): 34-41 [DOI: 10.1109/38.946629]
Senousy Z, Abdelsamea M M, Gaber M M, Abdar M, Acharya U R, Khosravi A and Nahavandi S. 2022. MCUa: multi-level context and uncertainty aware dynamic deep ensemble for breast cancer histology image classification. IEEE Transactions on Biomedical Engineering, 69(2): 818-829 [DOI: 10.1109/TBME.2021.3107446]
Sung H, Ferlay J, Siegel R L, Laversanne M, Soerjomataram I, Jemal A and Bray F. 2021. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3): 209-249 [DOI: 10.3322/caac.21660]
Tan M X and Le Q V. 2021. EfficientNetV2: smaller models and faster training//Proceedings of the 38th International Conference on Machine Learning. [s.l.]: PMLR: 10096-10106
Tong L, Sha Y and Wang M D. 2019. Improving classification of breast cancer by utilizing the image pyramids of whole-slide imaging and multi-scale convolutional neural networks//Proceedings of the 43rd IEEE Annual Computer Software and Applications Conference (COMPSAC). Milwaukee, USA: IEEE: 696-703 [DOI: 10.1109/COMPSAC.2019.00105]
Vaswani A, Ramachandran P, Srinivas A, Parmar N, Hechtman B and Shlens J. 2021. Scaling local self-attention for parameter efficient visual backbones//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 12889-12899 [DOI: 10.1109/CVPR46437.2021.01270]
Vesal S, Ravikumar N, Davari A, Ellmann S and Maier A. 2018. Classification of breast cancer histology images using transfer learning//Proceedings of the 15th International Conference Image Analysis and Recognition. Póvoa de Varzim, Portugal: Springer: 812-819 [DOI: 10.1007/978-3-319-93000-8_92]
Wang D Y, Khosla A, Gargeya R, Irshad H and Beck A H. 2016. Deep learning for identifying metastatic breast cancer [EB/OL]. [2022-12-21]. https://arxiv.org/pdf/1606.05718.pdf
Wang P, Li P F, Li Y M, Xu J and Jiang M F. 2022. Classification of histopathological whole slide images based on multiple weighted semi-supervised domain adaptation. Biomedical Signal Processing and Control, 73: #103400 [DOI: 10.1016/j.bspc.2021.103400]
Wang S, Liu J, Bi Y Y, Chen Z, Zheng Q H and Duan H F. 2018. Automatic recognition of breast gland based on two-step clustering and random forest. Computer Science, 45(3): 247-252
王帅, 刘娟, 毕姚姚, 陈哲, 郑群花, 段慧芳. 2018. 基于两步聚类和随机森林的乳腺腺管自动识别方法. 计算机科学, 45(3): 247-252 [DOI: 10.11896/j.issn.1002-137X.2018.03.039]
Wang X Y, Yang S, Zhang J, Wang M H, Zhang J, Huang J Z, Yang W and Han X. 2021. TransPath: Transformer-based self-supervised learning for histopathological image classification//Proceedings of the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Strasbourg, France: Springer: 186-195 [DOI: 10.1007/978-3-030-87237-3_18]
Woo S, Park J, Lee J Y and Kweon I S. 2018. CBAM: convolutional block attention module//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 3-19 [DOI: 10.1007/978-3-030-01234-2_1]
Xie P Z, Li T, Li F F, Zuo K, Zhou J and Liu J. 2021. Multi-scale convolutional neural network for melanoma histopathology image classification//Proceedings of the 3rd IEEE International Conference on Frontiers Technology of Information and Computer. Greenville, USA: IEEE: 551-554 [DOI: 10.1109/ICFTIC54370.2021.9647390]
Xu G X, Wang Y, Zhang Y Y, Li C S, Liu C X and Li F. 2021. Application of deep learning in tumor histopathological image analysis. Journal of Clinical and Pathological Research, 41(6): 1454-1462
徐贵璇, 王阳, 张杨杨, 李春森, 刘春霞, 李锋. 2021. 深度学习在肿瘤组织病理图像分析中的应用. 临床与病理杂志, 41(6): 1454-1462 [DOI: 10.3978/j.issn.2095-6959.2021.06.035]
Yan R, Chen L M, Li J T and Ren F. 2021. Research progress of cancer classification based on deep learning and histopathological images. Medical Journal of Peking Union Medical College Hospital, 12(5): 742-748
颜锐, 陈丽萌, 李锦涛, 任菲. 2021. 基于深度学习和组织病理图像的癌症分类研究进展. 协和医学杂志, 12(5): 742-748 [DOI: 10.12290/xhyxzz.2021-0452]
Yu B, Chen H C, Zhang Y K, Cong L L, Pang S C, Zhou H R, Wang Z Y and Cong X L. 2023. Data and knowledge co-driving for cancer subtype classification on multi-scale histopathological slides. Knowledge-Based Systems, 260: #110168 [DOI: 10.1016/j.knosys.2022.110168]
Zhang Y G, Zhang B L, Coenen F, Xiao J M and Lu W J. 2014. One-class kernel subspace ensemble for medical image classification. EURASIP Journal on Advances in Signal Processing, 2014(1): #17 [DOI: 10.1186/1687-6180-2014-17]
Zhao X P, Wang R F, Sun Z B and Wei X Q. 2023. Research on eight classifications of breast cancer pathological images based on improved DenseNet. Computer Engineering and Applications, 59(5): 213-221
赵晓平, 王荣发, 孙中波, 魏旭全. 2023. 改进DenseNet的乳腺癌病理图像八分类研究. 计算机工程与应用, 59(5): 213-221 [DOI: 10.3778/j.issn.1002-8331.2111-0081]
Zhao Y L, Ding W L, You Q H, Zhu F L, Zhu X J, Zheng K and Liu D D. 2023. Classification of whole slide images of breast histopathology based on spatial correlation characteristics. Journal of Image and Graphics, 28(4): 1135-1145
赵樱莉, 丁维龙, 游庆华, 朱峰龙, 朱筱婕, 郑魁, 刘丹丹. 2023. 融合空间相关性特征的乳腺组织病理全切片分类. 中国图象图形学报, 28(4): 1135-1145 [DOI: 10.11834/jig.211133]
Zheng J, Lin D N, Gao Z J, Wang S, He M J and Fan J P. 2020. Deep learning assisted efficient adaboost algorithm for breast cancer detection and early diagnosis. IEEE Access, 8: 96946-96954 [DOI: 10.1109/ACCESS.2020.2993536]