全监督和弱监督图网络的病理图像分割
Fully and weakly supervised graph networks for histopathology image segmentation
2024年29卷第3期 页码:697-712
纸质出版日期: 2024-03-16
DOI: 10.11834/jig.230477
沈熠婷, 陈昭, 张清华, 陈锦豪, 王庆国. 2024. 全监督和弱监督图网络的病理图像分割. 中国图象图形学报, 29(03):0697-0712
Shen Yiting, Chen Zhao, Zhang Qinghua, Chen Jinhao, Wang Qingguo. 2024. Fully and weakly supervised graph networks for histopathology image segmentation. Journal of Image and Graphics, 29(03):0697-0712
目的
计算机辅助技术以及显微病理图像处理技术给病理诊断带来了极大的便利。病理图像分割是常用的技术手段,可用于划分病灶和背景组织。开发高精度的分割算法,需要大量精准标注的数字病理图像,但是标注过程耗时费力,具有精准标注的病理图像稀少。而且,病理图像非常复杂,对病理组织分割算法的鲁棒性和泛化性要求极高。因此,本文提出一种基于图网络的病理图像分割框架。
方法
该框架有全监督图网络(full supervised graph network,FSGNet)和弱监督图网络(weakly supervised graph network,WSGNet)两种模式,以适应不同标注量的数据集以及多种应用场景的精度需求。通过图网络学习病理组织的不规则形态,FSGNet能达到较高的分割精度;WSGNet采用超像素级推理,仅需要稀疏点标注就能分割病理组织。
结果
本文在两个公开数据集GlaS(Gland Segmentation Challenge Dataset)(测试集分为A部分和B部分)、CRAG(colorectal adenocarcinoma gland)和一个私有数据集LUSC(lung squamous cell carcinoma)上进行测试。最终,本文所提框架的两种模式在3个数据集中整体像素级分类精度(overall accuracy,OA)和Dice指数(Dice index,DI)均优于对比算法,且FSGNet在GlaS B数据集中效果最明显,分别提升了1.61%和2.26%,WSGNet在CRAG数据集中较先进算法提升效果最明显,分别提升了2.63%和2.54%。
结论
本文所提框架的两种模式均优于多种目前先进的算法,表现出较好的泛化性和鲁棒性。
Objective
Computer-assisted techniques and histopathology image processing technologies have significantly facilitated pathological diagnosis. Among them, histopathology image segmentation is an integral component of histopathology image processing: it generally refers to separating target regions (e.g., tumor cells, glands, and cancer nests) from the background, and its results are further used for downstream tasks (e.g., cancer grading and survival prediction). In recent years, the rapid development of deep learning has brought significant breakthroughs in histopathology image segmentation. Segmentation networks, such as FCN and U-Net, have demonstrated strong capabilities in accurately delineating edges. However, most existing deep learning methods rely on a fully supervised learning mode, which depends on numerous accurately annotated digital histopathology images. Manual annotation, which must be conducted by medical professionals with expertise in histopathology, is time-consuming and also carries a high likelihood of missed and false annotations. Consequently, histopathology images with precise annotations are scarce. Moreover, histopathology images are highly complex: targets are extremely difficult to distinguish from the background (inter-class homogeneity), while within the same dataset of tissue samples there are significant variations among pathological objects (intra-class heterogeneity). Differences between patients and nonlinear relationships between image features impose high requirements on the robustness and generalization of histopathological tissue segmentation algorithms. Therefore, this study proposes a graph-based framework for histopathology image segmentation.
Method
The framework consists of two modes, namely, the fully supervised graph network (FSGNet) and the weakly supervised graph network (WSGNet), aiming to adapt to datasets with different levels of annotation and precision requirements in various application scenarios. FSGNet is used when working with samples that have pixel-level labels and when high accuracy is required; it is trained in a fully supervised manner. Meanwhile, WSGNet is utilized when dealing with samples that only have sparse point labels; it uses weakly supervised learning to extract histopathology image information and train the segmentation network. Furthermore, the proposed framework uses graph convolutional networks (GCN) to represent the irregular morphology of histopathological tissues. GCN is capable of handling data with arbitrary structures and learns the nonlinear structure of images by constructing a topological graph over histopathology images, which contributes to improving segmentation accuracy. The study introduces graph Laplacian regularization to facilitate the learning of similar features from neighboring nodes, effectively aggregating similar nodes and enhancing the proposed model’s generalization capability. FSGNet consists of a backbone network and a GCN. The backbone network follows an encoder-decoder structure to extract deep features from histopathology images, and the GCN learns the nonlinear structure of histopathological tissues, enhancing the network’s expressive power and generalization ability and ultimately segmenting the target regions from the background. WSGNet utilizes simple linear iterative clustering (SLIC) for superpixel segmentation of the original image, which transforms the weakly supervised semantic segmentation problem into a binary classification problem over superpixels. WSGNet leverages local spatial similarity to reduce the computational complexity of subsequent processing.
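To make the graph-convolution and Laplacian-regularization ideas above concrete, here is a minimal NumPy sketch (an illustration of the standard formulations, not the authors' implementation): one GCN layer with self-loops and symmetric degree normalization, plus the graph Laplacian smoothness penalty that encourages connected nodes to share features.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

def laplacian_regularizer(A, H):
    """Graph Laplacian smoothness penalty tr(H^T L H).

    Equals 1/2 * sum_ij A_ij * ||h_i - h_j||^2, so it penalizes
    feature differences between neighboring (similar) nodes.
    """
    L = np.diag(A.sum(axis=1)) - A             # unnormalized Laplacian
    return float(np.trace(H.T @ L @ H))
```

In practice the weights `W` are learned by backpropagation and the penalty is added to the segmentation loss with a balancing coefficient; both details are omitted here for brevity.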
In the preprocessing stage, the semantic information of point labels can be propagated to the entire superpixel region, thereby generating superpixel labels. WSGNet is capable of accomplishing the segmentation of histopathology images even with a limited number of point annotations.
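The preprocessing step above — spreading each sparse point annotation to the superpixel that contains it — can be sketched as follows. The helper name and argument format are hypothetical illustrations, assuming a superpixel id map such as one produced by SLIC.

```python
import numpy as np

def propagate_point_labels(superpixels, points):
    """Spread sparse point annotations to entire superpixels.

    superpixels: (h, w) int map of superpixel ids (e.g., from SLIC).
    points: iterable of (row, col, cls) point annotations.
    Returns an (h, w) label map: cls inside annotated superpixels,
    -1 (unlabeled) elsewhere.
    """
    labels = np.full(superpixels.shape, -1, dtype=int)
    for r, c, cls in points:
        # every pixel in the superpixel hit by the point gets its class
        labels[superpixels == superpixels[r, c]] = cls
    return labels
```

Superpixels left at -1 simply contribute no supervision, which is how a handful of point clicks can still train a superpixel-level classifier.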
Result
This study conducted tests on two public datasets, namely, the Gland Segmentation Challenge Dataset (GlaS) and the Colorectal Adenocarcinoma Gland (CRAG) dataset, as well as one private dataset called Lung Squamous Cell Carcinoma (LUSC). GlaS consists of 165 images, with a training-to-testing ratio of 85:80; it is stratified based on histological grades and fields of view, and the testing set is further divided into Parts A and B (60 and 20 images, respectively). CRAG comprises 213 images of colorectal adenocarcinoma, with a training-to-testing ratio of 173:40. LUSC contains 110 histopathological images, with a training-to-testing ratio of 70:40. The performance of FSGNet was compared with FCN-8, U-Net, and UNeXt. WSGNet was compared with recently proposed weakly supervised models, such as WESUP, CDWS, and SizeLoss. The two modes of the proposed framework outperformed the comparison algorithms in terms of overall accuracy (OA) and Dice index (DI) on all three datasets. FSGNet achieved an OA of 88.15% and DI of 89.64% on GlaS Part A, OA of 91.58% and DI of 91.23% on GlaS Part B, OA of 93.74% and DI of 92.58% on CRAG, and OA of 92.84% and DI of 93.20% on LUSC. WSGNet achieved an OA of 84.27% and DI of 86.15% on GlaS Part A, OA of 84.91% and DI of 83.60% on GlaS Part B, OA of 85.50% and DI of 80.17% on CRAG, and OA of 88.45% and DI of 87.89% on LUSC. These results indicate that the proposed framework exhibits robustness and generalization across different datasets, because its performance does not vary significantly.
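The two reported metrics are standard pixel-level measures. For reference, they can be computed as in this short sketch (function names are our own, for binary masks with foreground = 1):

```python
import numpy as np

def overall_accuracy(pred, gt):
    """OA: fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == gt).mean())

def dice_index(pred, gt):
    """DI = 2|P ∩ G| / (|P| + |G|) for binary foreground masks."""
    inter = np.logical_and(pred == 1, gt == 1).sum()
    total = (pred == 1).sum() + (gt == 1).sum()
    return float(2.0 * inter / total) if total > 0 else 1.0
```

OA rewards correct background pixels as well, while DI focuses on overlap of the foreground regions, which is why both are reported.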
Conclusion
The two modes of the proposed framework demonstrate excellent performance in histopathological image segmentation. Qualitative segmentation results indicate that the framework achieves more complete segmentation of instances and provides more accurate predictions of the central regions of target samples. It exhibits fewer missed and false detections, demonstrating strong generalization and robustness.
病理图像分割;图卷积网络(GCN);全监督学习;弱监督学习;点标签
histopathology image segmentation; graph convolutional network (GCN); fully supervised learning; weakly supervised learning; point labels
Achanta R, Shaji A, Smith K, Lucchi A, Fua P and Süsstrunk S. 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11): 2274-2282 [DOI: 10.1109/TPAMI.2012.120]
Ahn J, Cho S and Kwak S. 2019. Weakly supervised learning of instance segmentation with inter-pixel relations//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE: 2204-2213 [DOI: 10.1109/cvpr.2019.00231]
Anklin V, Pati P, Jaume G, Bozorgtabar B, Foncubierta-Rodriguez A, Thiran J P, Sibony M, Gabrani M and Goksel O. 2021. Learning whole-slide segmentation from inexact and incomplete labels using tissue graphs//Proceedings of the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention. Strasbourg, France: Springer: 636-646 [DOI: 10.1007/978-3-030-87196-3_59]
Anwar S M, Majid M, Qayyum A, Awais M, Alnowami M and Khan M K. 2018. Medical image analysis using convolutional neural networks: a review. Journal of Medical Systems, 42(11): #226 [DOI: 10.1007/s10916-018-1088-1]
Bai L, Cui L X, Jiao Y H, Rossi L and Hancock E R. 2022. Learning backtrackless aligned-spatial graph convolutional networks for graph classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2): 783-798 [DOI: 10.1109/TPAMI.2020.3011866]
Battaglia P W, Hamrick J B, Bapst V, Sanchez-Gonzalez A, Zambaldi V, Malinowski M, Tacchetti A, Raposo D, Santoro A, Faulkner R, Gulcehre C, Song F, Ballard A, Gilmer J, Dahl G, Vaswani A, Allen K, Nash C, Langston V, Dyer C, Heess N, Wierstra D, Kohli P, Botvinick M, Vinyals O, Li Y J and Pascanu R. 2018. Relational inductive biases, deep learning, and graph networks [EB/OL]. [2023-07-17]. https://arxiv.org/pdf/1806.01261.pdf
Bearman A, Russakovsky O, Ferrari V and Li F F. 2016. What’s the point: semantic segmentation with point supervision//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 549-565 [DOI: 10.1007/978-3-319-46478-7_34]
Bilodeau A, Delmas C V L, Parent M, De Koninck P, Durand A and Lavoie-Cardinal F. 2022. Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. Nature Machine Intelligence, 4(5): 455-466 [DOI: 10.1038/s42256-022-00472-w]
Chen H, Qi X J, Yu L Q and Heng P A. 2016. DCAN: deep contour-aware networks for accurate gland segmentation//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 2487-2496 [DOI: 10.1109/cvpr.2016.273]
Chen Z, Chen Z, Liu J X, Zheng Q, Zhu Y A, Zuo Y F, Wang Z Y, Guan X S, Wang Y and Li Y. 2021. Weakly supervised histopathology image segmentation with sparse point annotations. IEEE Journal of Biomedical and Health Informatics, 25(5): 1673-1685 [DOI: 10.1109/jbhi.2020.3024262]
Graham S, Chen H, Gamper J, Dou Q, Heng P A, Snead D, Tsang Y W and Rajpoot N. 2019. MILD-Net: minimal information loss dilated network for gland instance segmentation in colon histology images. Medical Image Analysis, 52: 199-211 [DOI: 10.1016/j.media.2018.12.001]
Gunesli G N, Sokmensuer C and Gunduz-Demir C. 2020. AttentionBoost: learning what to attend for gland segmentation in histopathological images by boosting fully convolutional networks. IEEE Transactions on Medical Imaging, 39(12): 4262-4273 [DOI: 10.1109/tmi.2020.3015198]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 770-778 [DOI: 10.1109/cvpr.2016.90]
Jia Z P, Huang X Y, Chang E I C and Xu Y. 2017. Constrained deep weak supervision for histopathology image segmentation. IEEE Transactions on Medical Imaging, 36(11): 2376-2388 [DOI: 10.1109/tmi.2017.2724070]
Kervadec H, Dolz J, Tang M, Granger E, Boykov Y and Ben Ayed I. 2019. Constrained-CNN losses for weakly supervised segmentation. Medical Image Analysis, 54: 88-99 [DOI: 10.1016/j.media.2019.02.009]
Kipf T N and Welling M. 2017. Semi-supervised classification with graph convolutional networks//Proceedings of the 5th International Conference on Learning Representations. Toulon, France: OpenReview.net
Li Q L, He X F, Wang Y T, Liu H Y, Xu D R and Guo F M. 2013. Review of spectral imaging technology in biomedical engineering: achievements and challenges. Journal of Biomedical Optics, 18(10): #100901 [DOI: 10.1117/1.JBO.18.10.100901]
Long J, Shelhamer E and Darrell T. 2015. Fully convolutional networks for semantic segmentation//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 3431-3440 [DOI: 10.1109/cvpr.2015.7298965]
Lu Y, Chen Y R, Zhao D B and Chen J X. 2019. Graph-FCN for image semantic segmentation//Proceedings of the 16th International Symposium on Neural Networks. Moscow, Russia: Springer: 97-105 [DOI: 10.1007/978-3-030-22796-8_11]
Qu H, Wu P X, Huang Q Y, Yi J R, Yan Z N, Li K, Riedlinger G M, De S, Zhang S T and Metaxas D N. 2020. Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Transactions on Medical Imaging, 39(11): 3655-3666 [DOI: 10.1109/tmi.2020.3002244]
Richie R, Aka A and Bhatia S. 2023. Free association in a neural network. Psychological Review, 130(5): 1360-1382 [DOI: 10.1037/rev0000396]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z H, Karpathy A, Khosla A, Bernstein M, Berg A and Li F F. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3): 211-252 [DOI: 10.1007/s11263-015-0816-y]
Samanta P and Singhal N. 2022. YAMU: yet another modified U-Net architecture for semantic segmentation//Proceedings of 2022 International Conference on Medical Imaging with Deep Learning. Zürich, Switzerland: PMLR: 1019-1033.
Simonyan K and Zisserman A. 2015. Very deep convolutional networks for large-scale image recognition//Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: ICLR
Sirinukunwattana K, Pluim J P W, Chen H, Qi X J, Heng P A, Guo Y B, Wang L Y, Matuszewski B J, Bruni E, Sanchez U, Böhm A, Ronneberger O, Cheikh B B, Racoceanu D, Kainz P, Pfeiffer M, Urschler M, Snead D R J and Rajpoot N M. 2017. Gland segmentation in colon histology images: the GlaS challenge contest. Medical Image Analysis, 35: 489-502 [DOI: 10.1016/j.media.2016.08.008]
Smola A J and Kondor R. 2003. Kernels and regularization on graphs//Proceedings of the 16th Learning Theory and Kernel Machines. Washington, USA: Springer: 144-158 [DOI: 10.1007/978-3-540-45167-9_12]
Sun L, Zhou M, Li Q L, Hu M H, Wen Y, Zhang J, Lu Y and Chu J H. 2022. Diagnosis of cholangiocarcinoma from microscopic hyperspectral pathological dataset by deep convolution neural networks. Methods, 202: 22-30 [DOI: 10.1016/j.ymeth.2021.04.005]
Tang C S, Hu C C, Sun J D and Sima H F. 2021. Deep learning-based medical images analysis evolved from convolution to graph convolution. Journal of Image and Graphics, 26(9): 2078-2093
唐朝生, 胡超超, 孙君顶, 司马海峰. 2021. 医学图像深度学习技术: 从卷积到图卷积的发展. 中国图象图形学报, 26(9): 2078-2093 [DOI: 10.11834/jig.200666]
Valanarasu J M J and Patel V M. 2022. UNeXt: MLP-based rapid medical image segmentation network//Proceedings of the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention. Singapore, Singapore: Springer: 23-33 [DOI: 10.1007/978-3-031-16443-9_3]
Yang L, Zhang Y Z, Zhao Z, Zheng H, Liang P X, Ying M T C, Ahuja A T and Chen D Z. 2019. Boxnet: deep learning based biomedical image segmentation using boxes only annotation [EB/OL]. [2018-06-02]. https://arxiv.org/pdf/1806.00593.pdf
Yun B X, Li Q L, Mitrofanova L, Zhou C H and Wang Y. 2023. Factor space and spectrum for medical hyperspectral image segmentation//Proceedings of the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention. Vancouver, Canada: Springer [DOI: 10.1007/978-3-031-43901-8_15]
Zhang J, Hua Z Y, Yan K Z, Tian K, Yao J H, Liu E Y, Liu M X and Han X. 2021. Joint fully convolutional and graph convolutional networks for weakly-supervised segmentation of pathology images. Medical Image Analysis, 73: #102183 [DOI: 10.1016/j.media.2021.102183]
Zhang Q H and Chen Z. 2022. Weakly supervised segmentation by tensor graph learning for whole slide images//Proceedings of the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention. Singapore, Singapore: Springer: 253-262 [DOI: 10.1007/978-3-031-16434-7_25]
Zhao T Y and Yin Z Z. 2021. Weakly supervised cell segmentation by point annotation. IEEE Transactions on Medical Imaging, 40(10): 2736-2747 [DOI: 10.1109/tmi.2020.3046292]
Zhou Z W, Siddiquee M M R, Tajbakhsh N and Liang J M. 2020. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Transactions on Medical Imaging, 39(6): 1856-1867 [DOI: 10.1109/tmi.2019.2959609]
Zhu Y and Li X. 2023. A survey of medical image captioning technique: encoding, decoding and latest advance. Journal of Image and Graphics, 28(7): 1990-2010
朱翌, 李秀. 2023. 医学图像描述综述: 编码、解码及最新进展. 中国图象图形学报, 28(7): 1990-2010 [DOI: 10.11834/jig.211021]