单域泛化X-ray乳腺肿瘤检测
Single-domain generalized breast tumor detection in X-ray images
2024, Vol. 29, Issue 3, Pages: 725-740
Print publication date: 2024-03-16
DOI: 10.11834/jig.230279
史彩娟, 郑远帆, 任弼娟, 孔凡跃, 段昌钰. 2024. 单域泛化X-ray乳腺肿瘤检测. 中国图象图形学报, 29(03):0725-0740
Shi Caijuan, Zheng Yuanfan, Ren Bijuan, Kong Fanyue, Duan Changyu. 2024. Single-domain generalized breast tumor detection in X-ray images. Journal of Image and Graphics, 29(03):0725-0740
目的
由于乳腺肿瘤病灶的隐蔽性强且极易转移,目前采用医学辅助诊断(computer-aided diagnosis, CAD)来尽早地发现肿瘤并诊断。然而,医学图像数据量少且标注昂贵,导致全监督场景下的基于深度学习的X-ray乳腺肿瘤检测方法的性能非常有限,且模型泛化能力弱;此外,噪声产生的域偏移(domain shift)也降低了不同环境下肿瘤检测的性能。针对上述挑战,提出一种单域泛化X-ray乳腺肿瘤检测方法。
方法
提出了一种单域泛化模型(single-domain generalization model, SDGM)进行X-ray乳腺肿瘤检测,采用ResNet-50(residual network-50)作为主干特征提取网络,设计了域特征增强模块(domain feature enhancement module, DFEM)来有效融合上采样与下采样中的全局信息以抑制噪声,然后在检测头处设计了实例泛化模块(instance generalization module,IGM),对每个实例的类别语义信息进行正则化与白化处理来提升模型的泛化性能,通过学习少量的有标注医学图像对不可预见的噪声图像进行迁移学习,缓解因有标记医学图像匮乏而导致的泛化能力弱的问题;同时避免模型的冗余训练,进一步增强模型在不同环境下的鲁棒性。
结果
为了验证所提模型SDGM的域内泛化性能,将INbreast的单域X-ray图像作为训练集,多种域偏移的图像为测试集,实验结果表明在域内泛化场景下SDGM性能优于FCOS(fully convolutional one-stage object detection)、Cascade-RCNN、FoveaBox、ATSS、TOOD(task-aligned one-stage object detection)、PVTv2-Transformer等方法,泛化性能比baseline方法的mAP(mean average precision)提升了9.7%;在训练数据量更小的前提下,单域泛化性能优于INbreast全监督场景下的baseline方法的性能。此外,为了进一步验证SDGM在不同数据集的域间的泛化性能,将CBIS-DDSM(curated breast imaging subset of DDSM)数据集作为训练集而多种域偏移的INbreast数据集作为测试集进行实验,所提方法SDGM比baseline方法提升了5.8%。
结论
所提单域泛化模型SDGM能够有效缓解域偏移对模型性能的影响,并能够针对医学数据域未知且数量少的特点进行泛化,能够较灵活地迁移至临床实践中未知域下的噪声场景。
Objective
Breast tumor detection in X-ray images is a great challenge in medical image analysis, primarily because lesions are highly concealed and prone to metastasis. Currently, computer-aided diagnosis (CAD) plays a pivotal role in early tumor detection and diagnosis. Deep learning-based object detection methods have achieved remarkable progress in detecting breast tumors in X-ray images when the training and testing data come from the same distribution. However, the limited availability of medical image data, together with the labor-intensive and highly specialized nature of annotation, constrains both detection performance and model generalization. In addition, domain shift in unseen domains, caused by noise, impairs breast tumor detection across diverse environments. To address these issues, existing studies have proposed domain adaptation and domain generalization methods. However, domain adaptation requires a partition between the target and source domains, while domain generalization requires training on multiple domains; such domain division is difficult to achieve given the scarcity of medical data. Therefore, single-domain generalization methods have recently been proposed: a model is trained on a single domain and then generalized to unseen domains. These methods suit medical data well and help mitigate domain shift. Although single-domain generalization has been widely applied to classification tasks, its application to object detection remains relatively nascent because of the inherent differences between the two tasks. Our analysis shows that classification treats each image as a single instance, so domain alignment operates on the holistic image.
In contrast, object detection must simultaneously consider multiple objects within each image, which leads to instance mismatch. Thus, we propose a novel instance alignment paradigm to facilitate single-domain generalization for breast tumor detection.
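The per-instance alignment above rests on removing instance feature statistics that encode domain style. As a minimal sketch, plain ZCA whitening over a batch of per-instance feature vectors illustrates the idea; this stands in for the switchable whitening the paper actually uses, and `whiten_instances` is an illustrative name, not the authors' code:

```python
import numpy as np

def whiten_instances(feats, eps=1e-8):
    """ZCA-whiten per-instance feature vectors: zero mean and
    (approximately) identity covariance across instances.

    feats: (N, C) array, one C-dimensional semantic feature per instance.
    """
    mu = feats.mean(axis=0, keepdims=True)
    centered = feats - mu
    # Sample covariance across instances
    cov = centered.T @ centered / max(len(feats) - 1, 1)
    # Inverse square root of the covariance via eigendecomposition
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(feats.shape[1]))
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ vecs.T
    return centered @ inv_sqrt
```

After whitening, first- and second-order statistics (mean, channel correlations) that tend to carry domain style are removed, which is the intuition behind using whitening to extract domain-invariant features.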
Method
To improve the generalization performance for robust breast tumor detection in X-ray images, we propose a novel model called the single-domain generalization model (SDGM). The SDGM is built upon the baseline (RetinaNet) and employs ResNet-50 as its backbone. Two pivotal modules are developed: the instance generalization module (IGM) and the domain feature enhancement module (DFEM). First, the IGM is positioned at the detection head to enhance generalization by normalizing and whitening the category semantic information of each instance. The IGM comprises N sets of 3 × 3 convolutions and a switchable whitening sub-module, which is widely recognized for its effectiveness in extracting instance-level domain-invariant features in classification tasks; accordingly, the IGM is integrated into the classification branch of the detection head. Second, the DFEM is devised to efficiently merge global information from both the up-sampling and down-sampling paths while mitigating the impact of noise in medical images. To counteract the noise introduced by conventional convolution in spatial features, a 3 × 3 convolution generates a foreground mask, which serves as the offset guiding deformable convolution sampling. Channel-wise attention is then leveraged to selectively suppress noise within each channel. The DFEM is incorporated into the feature pyramid network to attenuate noise during the fusion of feature maps at various scales, thereby promoting subsequent domain-invariant feature extraction.
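The channel-wise noise suppression step of the DFEM can be illustrated with a squeeze-and-excitation-style gate. This is a simplified NumPy sketch under stated assumptions: the bottleneck weights below are random stand-ins for learned parameters, and `channel_attention` is a hypothetical name, not the authors' implementation (which additionally uses mask-guided deformable convolution):

```python
import numpy as np

def channel_attention(fmap, reduction=4, seed=0):
    """Rescale each channel of a (C, H, W) feature map by a gate in (0, 1),
    suppressing channels that carry mostly noise."""
    C = fmap.shape[0]
    rng = np.random.default_rng(seed)
    # Random stand-ins for the learned bottleneck weights
    W1 = rng.normal(scale=0.1, size=(C // reduction, C))
    W2 = rng.normal(scale=0.1, size=(C, C // reduction))
    z = fmap.mean(axis=(1, 2))                 # squeeze: global average pool
    h = np.maximum(W1 @ z, 0.0)                # excitation: ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # sigmoid gate per channel
    return fmap * gate[:, None, None]          # reweight channels
```

Because the gate is strictly between 0 and 1, each channel can only be attenuated, never amplified, which is what makes this form of attention suitable for suppressing noisy channels during multi-scale fusion.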
Result
To assess the effectiveness of the proposed SDGM, we conduct extensive experiments on the CBIS-DDSM and INbreast datasets. For intra-domain generalization, the model is trained on single-domain INbreast images and tested on multiple domain-shifted versions of the same dataset; for inter-domain generalization, it is trained on CBIS-DDSM and tested on domain-shifted INbreast images. We also compare the SDGM against several state-of-the-art methods. In the intra-domain single-domain generalization scenario, the SDGM outperforms the baseline (RetinaNet) by 9.7% in mean average precision (mAP). Furthermore, it surpasses one-stage anchor-free methods (e.g., FCOS and FoveaBox), one-stage anchor-based methods (e.g., ATSS and TOOD), two-stage methods (e.g., Faster R-CNN and Cascade R-CNN), and even the Transformer-based PVTv2. In the supervised learning comparison, the SDGM trained with only 728 images surpasses RetinaNet, Cascade R-CNN, FoveaBox, and FCOS trained with 5 148 images, demonstrating that the SDGM generalizes remarkably well with substantially less training data. We also assess the impact of attention mechanisms on model performance: compared with TOOD, which uses no attention, the SDGM alleviates domain shift and achieves at least a 3.6% improvement in the single-domain generalization scenario; compared with PVTv2 and ResNeSt, which employ different attention mechanisms, it achieves improvements of 21.1% and 2.8%, respectively. In the inter-domain single-domain generalization scenario, the SDGM improves on the baseline by 5.8%. These results indicate that the proposed SDGM not only mitigates performance degradation under domain shift but also remains robust and generalizes across datasets.
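The mAP figures above are computed from IoU-based matching between predicted and ground-truth boxes. For reference, the standard intersection-over-union criterion (the generic definition, not code from the paper) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2] —
    the overlap criterion under which a predicted tumor box counts as a
    true positive against a ground-truth box."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Average precision is then the area under the precision–recall curve obtained by sweeping the detector's confidence threshold, with detections counted as true positives when their IoU with an unmatched ground-truth box exceeds a fixed threshold.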
Conclusion
In this study, we develop the SDGM for detecting breast tumors in X-ray images, focusing on two key components: the DFEM and the IGM. The DFEM improves the performance of the SDGM by effectively suppressing noise in the global information, while the IGM, positioned at the detection head, enhances generalization by normalizing and whitening the category information of each object. We evaluate the SDGM against multiple benchmarks on the INbreast and CBIS-DDSM datasets. The SDGM handles domain shift and performs well even with limited labeled medical data, mitigating a central challenge in medical image analysis, and it remains robust across different environmental conditions. In summary, the SDGM offers a promising solution for improving breast tumor detection in X-ray images, with valuable implications for clinical practice.
X-ray乳腺肿瘤检测;单域泛化;域偏移;正则化与白化;特征增强
breast tumor detection in X-ray images; single-domain generalization; domain shift; normalization and whitening; feature enhancement
Aly G H, Marey M, El-Sayed S A and Tolba M F. 2021. YOLO based breast masses detection and classification in full-field digital mammograms. Computer Methods and Programs in Biomedicine, 200: #105823 [DOI: 10.1016/j.cmpb.2020.105823]
Cai Z W and Vasconcelos N. 2018. Cascade R-CNN: delving into high quality object detection//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 6154-6162 [DOI: 10.1109/CVPR.2018.00644]
Chen Y H, Li W, Sakaridis C, Dai D X and Van Gool L. 2018. Domain adaptive Faster R-CNN for object detection in the wild//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 3339-3348 [DOI: 10.1109/CVPR.2018.00352]
Choi S, Jung S, Yun H W N, Kim J T, Kim S and Choo J. 2021. RobustNet: improving domain generalization in urban-scene segmentation via instance selective whitening//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 11575-11585 [DOI: 10.1109/CVPR46437.2021.01141]
Dai J F, Qi H Z, Xiong Y W, Li Y, Zhang G D, Hu H and Wei Y C. 2017. Deformable convolutional networks//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 764-773 [DOI: 10.1109/ICCV.2017.89]
Dai X Y, Chen Y P, Xiao B, Chen D D, Liu M C, Yuan L and Zhang L. 2021. Dynamic head: unifying object detection heads with attentions//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 7369-7378 [DOI: 10.1109/CVPR46437.2021.00729]
Desjardins G, Simonyan K and Pascanu R. 2015. Natural neural networks//Proceedings of the 29th International Conference on Neural Information Processing Systems. Montreal, Canada: 2071-2079
Fan X J, Wang Q F, Ke J J, Yang F, Gong B Q and Zhou M Y. 2021. Adversarially adaptive normalization for single domain generalization//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 8204-8213 [DOI: 10.1109/CVPR46437.2021.00811]
Feng C J, Zhong Y J, Gao Y, Scott M R and Huang W L. 2021. TOOD: task-aligned one-stage object detection//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE: 3490-3499 [DOI: 10.1109/ICCV48922.2021.00349]
Ganin Y and Lempitsky V. 2015. Unsupervised domain adaptation by backpropagation//Proceedings of the 32nd International Conference on Machine Learning. Lille, France: PMLR: 1180-1189
Huang L, Yang D W, Lang B and Deng J. 2018. Decorrelated batch normalization//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 791-800 [DOI: 10.1109/CVPR.2018.00089]
Ibrokhimov B and Kang J Y. 2022. Two-stage deep learning method for breast cancer detection using high-resolution mammogram images. Applied Sciences, 12(9): #4616 [DOI: 10.3390/app12094616]
Ioffe S and Szegedy C. 2015. Batch normalization: accelerating deep network training by reducing internal covariate shift//Proceedings of the 32nd International Conference on Machine Learning. Lille, France: PMLR: 448-456
Kolchev A, Pasynkov D, Egoshin I, Kliouchkin I, Pasynkova O and Tumakov D. 2022. YOLOv4-based CNN model versus nested contours algorithm in the suspicious lesion detection on the mammography image: a direct comparison in the real clinical settings. Journal of Imaging, 8(4): #88 [DOI: 10.3390/jimaging8040088]
Kong T, Sun F C, Liu H P, Jiang Y N, Li L and Shi J B. 2020. FoveaBox: beyound anchor-based object detection. IEEE Transactions on Image Processing, 29: 7389-7398 [DOI: 10.1109/TIP.2020.3002345]
Lee R S, Gimenez F, Hoogi A, Miyake K K, Gorovoy M and Rubin D L. 2017. A curated mammography data set for use in computer-aided detection and diagnosis research. Scientific Data, 4: #170177 [DOI: 10.1038/sdata.2017.177]
Li D, Zhang J S, Yang Y X, Liu C, Song Y Z and Hospedales T M. 2019. Episodic training for domain generalization//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 1446-1455 [DOI: 10.1109/ICCV.2019.00153]
Li J L, Xu R S, Ma J, Zou Q, Ma J Q and Yu H K. 2023. Domain adaptive object detection for autonomous driving under foggy weather//Proceedings of 2023 IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE: 612-622 [DOI: 10.1109/WACV56688.2023.00068]
Li P, Li D, Li W, Gong S G, Fu Y W and Hospedales T M. 2021. A simple feature augmentation for domain generalization//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 8866-8875 [DOI: 10.1109/ICCV48922.2021.00876]
Li W Y, Li F Y, Luo Y K, Wang P and Sun J. 2020. Deep domain adaptive object detection: a survey//Proceedings of 2020 IEEE Symposium Series on Computational Intelligence (SSCI). Canberra, Australia: IEEE: 1808-1813 [DOI: 10.1109/SSCI47803.2020.9308604]
Li Y J, Fang C, Yang J M, Wang Z W, Lu X and Yang M H. 2017. Universal style transfer via feature transforms//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc.: 385-395
Lin T Y, Goyal P, Girshick R, He K M and Dollár P. 2017. Focal loss for dense object detection//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 2999-3007 [DOI: 10.1109/ICCV.2017.324]
Liu D N, Zhang C Y, Song Y, Huang H, Wang C Y, Barnett M and Cai W D. 2023. Decompose to adapt: cross-domain object detection via feature disentanglement. IEEE Transactions on Multimedia, 25: 1333-1344 [DOI: 10.1109/TMM.2022.3141614]
Liu Z W, Miao Z Q, Pan X G, Zhan X H, Lin D H, Yu S X and Gong B Q. 2020. Open compound domain adaptation//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 12403-12412 [DOI: 10.1109/CVPR42600.2020.01242]
Marmot M G, Altman D G, Cameron D A, Dewar J A, Thompson S G and Wilcox M. 2013. The benefits and harms of breast cancer screening: an independent review. British Journal of Cancer, 108(11): 2205-2240 [DOI: 10.1038/bjc.2013.177]
Mattolin G, Zanella L, Ricci E and Wang Y M. 2023. ConfMix: unsupervised domain adaptation for object detection via confidence-based mixing//Proceedings of 2023 IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE: 423-433 [DOI: 10.1109/WACV56688.2023.00050]
Moreira I C, Amaral I, Domingues I, Cardoso A, Cardoso M J and Cardoso J S. 2012. INbreast: toward a full-field digital mammographic database. Academic Radiology, 19(2): 236-248 [DOI: 10.1016/j.acra.2011.09.014]
Pan X G, Zhan X H, Shi J P, Tang X O and Luo P. 2019. Switchable whitening for deep representation learning//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 1863-1871 [DOI: 10.1109/ICCV.2019.00195]
Qiao F C, Zhao L and Peng X. 2020. Learning to learn single domain generalization//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 12553-12562 [DOI: 10.1109/CVPR42600.2020.01257]
Quy H D, Son N N and Anh H P H. 2023. DeYOLOv3: an optimal mass detector for advanced breast cancer diagnostics//Huang Y P, Wang W J, Quoc H A, Le H G and Quach H N, eds. Computational Intelligence Methods for Green Technology and Sustainable Development. Cham, Switzerland: Springer: 325-335 [DOI: 10.1007/978-3-031-19694-2_29]
Razali N F, Isa I S, Sulaiman S N, Abdul Karim N K, Osman M K and Che Soh Z H. 2023. Enhancement technique based on the breast density level for mammogram for computer-aided diagnosis. Bioengineering, 10(2): #153 [DOI: 10.3390/bioengineering10020153]
Ren S Q, He K M, Girshick R and Sun J. 2015. Faster R-CNN: towards real-time object detection with region proposal networks//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 91-99
Saito K, Ushiku Y, Harada T and Saenko K. 2019. Strong-weak distribution alignment for adaptive object detection//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 6949-6958 [DOI: 10.1109/CVPR.2019.00712]
Song G L, Liu Y and Wang X G. 2020. Revisiting the sibling head in object detector//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 11560-11569 [DOI: 10.1109/CVPR42600.2020.01158]
Su Y Y, Liu Q, Xie W T and Hu P Z. 2022. YOLO-LOGO: a transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms. Computer Methods and Programs in Biomedicine, 221: #106903 [DOI: 10.1016/j.cmpb.2022.106903]
Tian Z, Shen C H, Chen H and He T. 2019. FCOS: fully convolutional one-stage object detection//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 9626-9635 [DOI: 10.1109/ICCV.2019.00972]
Ulyanov D, Vedaldi A and Lempitsky V. 2017. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 4105-4113 [DOI: 10.1109/CVPR.2017.437]
Volpi R, Namkoong H, Sener O, Duchi J, Murino V and Savarese S. 2018. Generalizing to unseen domains via adversarial data augmentation//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montreal, Canada: 5339-5349
Wang L, Cao Y, Guo S C, Tang L, Kuai Z X, Wang R P and Wang L H. 2021. Semantic Laplacian pyramids network for multicenter breast tumor segmentation. Journal of Image and Graphics, 26(9): 2193-2207 [DOI: 10.11834/jig.210138]
Wang W H, Xie E Z, Li X, Fan D P, Song K T, Liang D, Lu T, Luo P and Shao L. 2022. PVT v2: improved baselines with pyramid vision transformer. Computational Visual Media, 8(3): 415-424 [DOI: 10.1007/s41095-022-0274-8]
Wu A M and Deng C. 2022. Single-domain generalized object detection in urban scene via cyclic-disentangled self-distillation//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 837-846 [DOI: 10.1109/CVPR52688.2022.00092]
Zhang C, Li Z X, Liu J J, Peng P X, Ye Q X, Lu S J, Huang T J and Tian Y H. 2022b. Self-guided adaptation: progressive representation alignment for domain adaptive object detection. IEEE Transactions on Multimedia, 24: 2246-2258 [DOI: 10.1109/TMM.2021.3078141]
Zhang H, Wu C R, Zhang Z Y, Zhu Y, Lin H B, Zhang Z, Sun Y, He T, Mueller J, Manmatha R, Li M and Smola A. 2022c. ResNeSt: split-attention networks//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New Orleans, USA: IEEE: 2735-2745 [DOI: 10.1109/CVPRW56347.2022.00309]
Zhang L L, Li Y F, Chen H J, Wu W, Chen K and Wang S K. 2022a. Anchor-free YOLOv3 for mass detection in mammogram. Expert Systems with Applications, 191: #116273 [DOI: 10.1016/j.eswa.2021.116273]
Zhang S F, Chi C, Yao Y Q, Lei Z and Li S Z. 2020. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 9756-9765 [DOI: 10.1109/CVPR42600.2020.00978]
Zhao L and Wang L M. 2022. Task-specific inconsistency alignment for domain adaptive object detection//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 14197-14206 [DOI: 10.1109/CVPR52688.2022.01382]
Zhao X, Gong X, Fan L and Luo J. 2022. Attention-based networks of human breast bimodal ultrasound imaging classification. Journal of Image and Graphics, 27(3): 911-922 [DOI: 10.11834/jig.210370]
Zhou K Y, Liu Z W, Qiao Y, Xiang T and Loy C C. 2023. Domain generalization: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4396-4415 [DOI: 10.1109/TPAMI.2022.3195549]