Research on a Continual Test-Time Domain Adaptive Image Classification Method
2025, Pages: 1-15
Received: 2024-12-04; Revised: 2025-02-13; Accepted: 2025-02-25; Published online: 2025-02-26
DOI: 10.11834/jig.240739
Objective
Continual test-time adaptation (CTTA) aims to adapt a source pre-trained model to continually changing target domains without using any source data. Existing CTTA methods rely mainly on self-training: within a mean-teacher framework, predictions on data-augmented samples serve as pseudo-labels, and a consistency loss drives model adaptation. However, our experiments show that the random data augmentation strategies used in existing methods ignore the importance of inter-domain differences, causing an imbalance between model stability and generalization and making knowledge transfer between certain domains more challenging. To address this, this paper proposes a continual test-time adaptation method oriented to inter-domain differences, focusing on image classification in computer vision and exploring how CTTA techniques can improve a model's ability to adapt to new domains.
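To make the mean-teacher self-training loop described above concrete, the following is a minimal sketch in a PyTorch style; the helper names (`augment`, `adapt_step`) and the hyperparameter values are illustrative assumptions, not the exact procedure used in this paper or in CoTTA.

```python
import torch

def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track an exponential moving average of the student weights.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def adapt_step(student, teacher, optimizer, x, augment, n_aug=4):
    # Pseudo-label: teacher prediction averaged over several augmented views.
    teacher.eval()
    with torch.no_grad():
        views = torch.stack([teacher(augment(x)).softmax(dim=1) for _ in range(n_aug)])
        pseudo = views.mean(dim=0)
    # Consistency loss: soft cross-entropy between student output and the pseudo-label.
    loss = -(pseudo * student(x).log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return pseudo.argmax(dim=1)  # online prediction for the current test batch
```

In mean-teacher style CTTA, the teacher is typically initialized as a copy of the source pre-trained model and only the student is updated by gradient descent, which is why the pseudo-labels come from the slowly moving teacher.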
Method
First, an elastic data augmentation strategy based on inter-domain differences is proposed. Gram matrices representing inter-domain feature styles are constructed to measure the difference between adjacent domains, and a suitable elasticity factor is selected to control the strength of data augmentation, so that inter-domain differences are taken into account at the data preprocessing level and the model can better adapt to complex, changing domains. Second, a global elastic symmetric cross-entropy loss function is proposed. The elasticity factor computed from inter-domain differences is applied to pseudo-label generation and to the construction of the consistency loss, so that inter-domain differences are also considered at the model optimization level, enhancing the model's ability to understand and adapt to different domain changes. Finally, a confidence-based pseudo-label self-correction strategy is proposed. Under elastic augmentation, strong augmentation applies large transformations to the original data, so the model may suffer prediction bias, whereas weak augmentation involves only small transformations that do not significantly alter the basic features, so the model predicts such samples with high confidence. The strategy uses high-confidence predictions on weakly augmented samples to self-correct the predictions on strongly augmented samples, reducing error accumulation.
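The inter-domain difference measurement described above can be sketched as follows; this is only an illustrative implementation of the Gram-matrix idea, and the choice of backbone layer, the normalization, and the tanh mapping to an elasticity factor are assumptions rather than the paper's exact design.

```python
import torch

def gram_matrix(features):
    # features: (N, C, H, W) activations from a chosen backbone layer.
    n, c, h, w = features.shape
    f = features.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (N, C, C) style descriptors

def domain_difference(feat_prev, feat_curr):
    # Frobenius distance between the batch-averaged Gram matrices of two domains.
    g_prev = gram_matrix(feat_prev).mean(dim=0)
    g_curr = gram_matrix(feat_curr).mean(dim=0)
    return torch.linalg.norm(g_prev - g_curr)

def elasticity_factor(diff, scale=1.0):
    # Map the unbounded style difference into (0, 1); a larger domain shift
    # yields a larger factor and hence stronger ("more elastic") augmentation.
    return torch.tanh(scale * diff)
```

The resulting factor can then be used to scale augmentation strength for the current batch, for example the magnitude or number of applied transformations.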
Results
The proposed method is compared with several state-of-the-art algorithms on the CIFAR10-C, CIFAR100-C, and ImageNet-C datasets. On CIFAR10-C, it reduces the error rate by about 2.3% relative to the baseline method CoTTA; on CIFAR100-C, it reduces the error rate by about 2.7% relative to CoTTA; and on ImageNet-C, it reduces the error rate by about 3.6% in the comparative experiments. Ablation experiments on CIFAR10-C further verify the effectiveness of each module. In addition, to reflect more realistic domain-change scenarios, a random domain-order experiment is designed on CIFAR100-C. The results show that the proposed method achieves a lower error rate than existing methods under random domain input, reducing the average error rate by 3.9% compared with the baseline, which demonstrates that the method can effectively assess inter-domain relationships and deploy flexible strategies to improve adaptation to continually changing target domains.
Conclusion
The proposed algorithm balances the generalization and stability of the model in continual test-time adaptation scenarios and effectively reduces error accumulation.
Objective
In recent years, deep neural networks (DNNs) have demonstrated exceptional performance in numerous computer vision tasks, including image classification, dense prediction, and image segmentation. This success largely depends on the test data following the same distribution as the training data. In real-world applications, however, this consistency is often disrupted by unknown changes in factors such as weather and lighting, which increase domain diversity and cause deployed models to generalize poorly and lose accuracy. Continual test-time adaptation (CTTA) has been proposed to address this challenge: it adapts a source pre-trained model to a continuously changing target domain without using any source data. Existing CTTA methods focus primarily on self-training, using pseudo-labels predicted on data-augmented samples within the mean-teacher framework. This paper argues that the essence of continual test-time adaptation is the transformation of feature styles between domains. Our experiments show that existing methods, which rely on random data augmentation strategies, overlook the importance of inter-domain differences; such simple, uniform augmentation leads to insufficient model stability and generalization and makes knowledge transfer across certain domains difficult. To this end, this paper proposes a continual test-time adaptation method oriented to inter-domain differences, focusing on image classification tasks in computer vision and exploring how CTTA techniques can improve a model's ability to adapt to new domains.
Method
First, this paper posits that the essence of domain variation is the variation of feature styles between domains, and that the features of different target domains therefore differ. Accordingly, the paper constructs an elastic data augmentation strategy based on inter-domain differences. Measuring inter-domain differences, i.e., the distributional differences between the features of different domains, is a key step in continual test-time adaptation. The Gram matrix, a standard tool for assessing feature-style differences, is used here: by comparing Gram matrices it becomes possible to quantify the difference in feature distributions between adjacent domains and to compute an appropriate elasticity factor for the subsequent elastic augmentation operations. This approach accounts for inter-domain differences at the data preprocessing level, enabling the model to adapt flexibly to continuously changing domains. Second, based on the differences in inter-domain feature styles, the paper proposes a global elastic symmetric cross-entropy consistency loss function. The elasticity factor computed from inter-domain differences is incorporated at the pseudo-label and loss-function levels, so that inter-domain differences are also considered during model optimization, yielding a loss that balances model generalization and stability. Specifically, the pseudo-labels and the weights between the forward and backward cross-entropy terms are adjusted dynamically according to the degree of inter-domain feature-style difference, enhancing the model's adaptability to new domains. Finally, a confidence-based pseudo-label correction strategy is proposed. Because the elastic augmentation scales with the degree of inter-domain difference, it can produce many strongly augmented samples whose characteristics are obscured, making it difficult for the model to predict their true labels and leading to low-quality pseudo-labels and error accumulation. Therefore, high-confidence predictions on weakly augmented samples are used to correct the predictions on strongly augmented samples, reducing the low-quality pseudo-labels caused by high-intensity augmentation during continual test-time adaptation and effectively suppressing error accumulation.
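As an illustration of how an elasticity factor might enter the loss and the pseudo-label correction step, the sketch below combines a forward/backward (symmetric) cross-entropy weighted by the factor with confidence-gated correction of strong-view pseudo-labels by weak-view predictions; the specific weighting scheme and the 0.9 threshold are assumptions for illustration only, not the paper's exact formulation.

```python
import torch

def elastic_symmetric_ce(student_probs, pseudo, alpha, eps=1e-7):
    # Forward CE: the pseudo-label supervises the student prediction.
    ce_fwd = -(pseudo * torch.log(student_probs + eps)).sum(dim=1)
    # Reverse CE: the student prediction supervises the pseudo-label (more noise-tolerant).
    ce_rev = -(student_probs * torch.log(pseudo + eps)).sum(dim=1)
    # alpha in (0, 1) comes from the inter-domain elasticity factor: the larger the
    # domain shift, the more weight is shifted toward the reverse term.
    return ((1 - alpha) * ce_fwd + alpha * ce_rev).mean()

def correct_pseudo_labels(weak_probs, strong_probs, threshold=0.9):
    # Where the weak-view prediction is confident, let it replace the strong-view one.
    conf, _ = weak_probs.max(dim=1)
    mask = (conf >= threshold).float().unsqueeze(1)
    return mask * weak_probs + (1 - mask) * strong_probs
```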
Results
Comprehensive comparative experiments were conducted on the CIFAR10-C, CIFAR100-C, and ImageNet-C datasets against a range of advanced algorithms. The results show that the proposed algorithm achieves significant improvements over the baseline method CoTTA on all three datasets: the error rate is reduced by about 2.3% on CIFAR10-C, about 2.7% on CIFAR100-C, and about 3.6% on ImageNet-C. These results demonstrate that the algorithm effectively enhances the robustness and accuracy of the model across datasets of varying difficulty and complexity. Ablation experiments on CIFAR10-C further verify the effectiveness of each module: modules are added individually or in combination to measure their impact on overall performance, which clarifies their specific contributions and provides a basis for further optimization. In addition, to better reflect real-world domain variation, experiments with randomly ordered domain inputs were designed on CIFAR100-C. Under random domain input, the elastic symmetric cross-entropy based on domain-difference detection yields a lower error rate than existing methods, reducing the average error rate by 3.9% relative to the baseline. This matters because, in practice, models must make predictions in constantly changing environments; the improvement means the model can better adapt to unknown domain changes and maintain high performance in real applications.
Conclusion
The algorithm presented in this paper effectively balances the generalization and stability of the model in scenarios of continuous test-time adaptation, while significantly reducing the accumulation of errors during the test-time adaptation process. This balance is crucial for machine learning models to maintain performance in dynamically changing environments. Our research focuses not only on the theoretical foundations of the algorithm but also on its effectiveness in practical applications.
Boudiaf M, Denton T, Van Merriënboer B, Dumoulin V and Triantafillou E. 2023. In search for a generalizable method for source free domain adaptation // Proceedings of the 40th International Conference on Machine Learning. Honolulu, Hawaii, USA: PMLR: 2914-2931 [DOI: 10.48550/arXiv.2302.06658]
Brahma D and Rai P. 2023. A probabilistic framework for lifelong test-time adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: CVPR: 3582-3591 [DOI: 10.1609/aaai.v38i10.29018]
Chen T, Duan Y, Li D, Qi L, Shi Y and Gao Y. 2024. PG-LBO: enhancing high-dimensional Bayesian optimization with pseudo-label and Gaussian process guidance // Proceedings of the AAAI Conference on Artificial Intelligence. Vancouver, Canada: AAAI: 11381-11389 [DOI: 10.48550/arXiv.2312.16983]
Cao N and Saukh O. 2023. Geometric data augmentations to mitigate distribution shifts in pollen classification from microscopic images [EB/OL]. [2023-11-18]. https://arxiv.org/abs/2311.11029.pdf
Chen D, Wang D, Darrell T and Ebrahimi S. 2022. Contrastive test-time adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, LA, USA: CVPR: 295-305 [DOI: 10.1109/CVPR52688.2022.00039]
Croce F, Andriushchenko M, Sehwag V, Debenedetti E, Flammarion N, Chiang M, Mittal P and Hein M. 2020. RobustBench: a standardized adversarial robustness benchmark [EB/OL]. [2021-10-31]. https://arxiv.org/abs/2010.09670.pdf
Döbler M, Marsden R A and Yang B. 2023. Robust mean teacher for continual and gradual test-time adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: CVPR: 7704-7714 [DOI: 10.1109/CVPR52729.2023.00744]
De Lange M, Aljundi R, Masana M, Parisot S, Jia X, Leonardis A, Slabaugh G and Tuytelaars T. 2021. A continual learning survey: defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7): 3366-3385 [DOI: 10.1109/TPAMI.2021.3057446]
Hou F, Yuan J, Yang Y, Liu Y, Zhang Y, Zhong C, Shi Z, Fan J, Rui Y and He Z. 2024. DomainVerse: a benchmark towards real-world distribution shifts for tuning-free adaptive domain generalization [EB/OL]. [2024-03-05]. https://arxiv.org/abs/2403.02714.pdf
Jin D, Jin Z, Hu Z, Vechtomova O and Mihalcea R. 2022. Deep learning for text style transfer: a survey. Computational Linguistics, 48(1): 155-205 [DOI: 10.1162/coli_a_00426]
Kundu J N, Venkat N and Babu R V. 2020. Universal source-free domain adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: CVPR: 4544-4553 [DOI: 10.1109/CVPR42600.2020.00460]
Li W B, Xiong Y K, Fan Z C, Deng B, Cao F Y and Gao Y. 2024. Research progress and trends in continuous learning. Journal of Computer Research and Development, 61(6): 1476-1496 [DOI: 10.7544/issn1000-1239.202220820] (in Chinese)
Kumar A, Ma T and Liang P. 2020. Understanding self-training for gradual domain adaptation // Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: PMLR: 5468-5479 [DOI: 10.48550/arXiv.2002.11361]
Lee D, Yoon J and Hwang S J. 2024. BECoTTA: input-dependent online blending of experts for continual test-time adaptation [EB/OL]. [2024-05-31]. https://arxiv.org/abs/2402.08712.pdf
Liu J, Xu R, Yang S, Zhang R, Zhang Q, Chen Z, Guo Y and Zhang S. 2024. Adaptive distribution masked autoencoders for continual test-time adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: CVPR: 28653-28663 [DOI: 10.1109/CVPR52733.2024.02707]
Luo Y, Zheng L, Guan T, Yu J and Yang Y. 2019. Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: CVPR: 2507-2516 [DOI: 10.1109/CVPR.2019.00261]
Li Y, Fang C, Yang J, Wang Z, Lu X and Yang M H. 2017. Universal style transfer via feature transforms // Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA: NIPS: 385-395 [DOI: 10.48550/arXiv.1705.08086]
Long M, Cao Y, Wang J and Jordan M. 2015. Learning transferable features with deep adaptation networks // Proceedings of the 32nd International Conference on Machine Learning. Lille, France: PMLR: 97-105 [DOI: 10.48550/arXiv.1502.02791]
Laskin M, Srinivas A and Abbeel P. 2020. CURL: contrastive unsupervised representations for reinforcement learning // Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: PMLR: 5639-5650 [DOI: 10.48550/arXiv.2004.04136]
Liu Y, Kothari P, Van Delft B, Bellot-Gurlet B, Mordan T and Alahi A. 2021. TTT++: when does self-supervised test-time training fail or thrive // Proceedings of the 36th International Conference on Neural Information Processing Systems. Beijing, China: NIPS: 21808-21820 [DOI: 10.19320/arXiv.2021.04416]
Marsden R A, Döbler M and Yang B. 2024. Introducing intermediate domains for effective self-training during test-time // Proceedings of the 34th International Joint Conference on Neural Networks. Yokohama, Japan: IJCNN: 1-10 [DOI: 10.1109/IJCNN60899.2024.10651501]
Gao C X, Xu Z Z, Wu D Y, Yu C Q and Sang N. 2024. Deep learning-based real-time semantic segmentation: a survey. Journal of Image and Graphics, 29(5): 1119-1145 [DOI: 10.11834/jig.230659] (in Chinese)
Muandet K, Balduzzi D and Schölkopf B. 2013. Domain generalization via invariant feature representation // Proceedings of the 30th International Conference on Machine Learning. Atlanta, Georgia, USA: PMLR: 10-18 [DOI: 10.48550/arXiv.2004.04393]
Na J, Ha J W, Chang H J, Han D and Hwang W. 2024. Switching temporary teachers for semi-supervised semantic segmentation // Proceedings of the 38th International Conference on Neural Information Processing Systems. Vancouver, Canada: NIPS: 40367-40380 [DOI: 10.48550/arXiv.2004.04393]
Patel V M, Gopalan R, Li R and Chellappa R. 2015. Visual domain adaptation: a survey of recent advances. IEEE Signal Processing Magazine, 32(3): 53-69 [DOI: 10.1016/j.neucom.2018.05.083]
Parisi G I, Kemker R, Part J L, Kanan C and Wermter S. 2019. Continual lifelong learning with neural networks: a review. Neural Networks, 113(2): 54-71 [DOI: 10.1016/j.neunet.2019.01.012]
Qu S, Zou T, He L, Röhrbein F, Knoll A, Chen G and Jiang C. 2024. LEAD: learning decomposition for source-free universal domain adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: CVPR: 23334-23343 [DOI: 10.1109/CVPR52733.2024.02202]
Stacke K, Eilertsen G, Unger J and Lundström C. 2020. Measuring domain shift for deep learning in histopathology. IEEE Journal of Biomedical and Health Informatics, 25(2): 325-336 [DOI: 10.1109/JBHI.2020.3032060]
Sastry C S and Oore S. 2020. Detecting out-of-distribution examples with Gram matrices // Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: PMLR: 8491-8501 [DOI: 10.48550/arXiv.1912.12510]
Wang D, Shelhamer E, Liu S, Olshausen B and Darrell T. 2021. Tent: fully test-time adaptation by entropy minimization // Proceedings of the International Conference on Learning Representations. Vienna, Austria: ICLR: 1-8 [DOI: 10.48550/arXiv.2006.10726]
Shorten C and Khoshgoftaar T M. 2019. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1): 1-48 [DOI: 10.48550/arXiv.2204.08610]
Sakaridis C, Dai D and Van Gool L. 2021. ACDC: the adverse conditions dataset with correspondences for semantic driving scene understanding // Proceedings of the IEEE/CVF International Conference on Computer Vision. ICCV: 10765-10775 [DOI: 10.48550/arXiv.2104.13395]
Tan J, Lyu F, Ni C, Feng T, Hu F, Zhang Z, Zhao S and Wang L. 2024. Less is more: pseudo-label filtering for continual test-time adaptation [EB/OL]. [2024-07-12]. https://arxiv.org/abs/2406.02609.pdf
Tarvainen A and Valpola H. 2017. Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results // Proceedings of the 31st International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA: NIPS: 1195-1204 [DOI: 10.48550/arXiv.1703.01780]
Wang Q, Fink O, Van Gool L and Dai D. 2022. Continual test-time domain adaptation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, LA, USA: CVPR: 7201-7211 [DOI: 10.48550/arXiv.2203.13591]
Wu H, Zhao S, Huang X, Wen C, Li X and Wang C. 2024. Commonsense prototype for outdoor unsupervised 3D object detection // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: CVPR: 14968-14977 [DOI: 10.48550/arXiv.2404.16493]
Yang S, Wang Y, Van De Weijer J, Herranz L and Jui S. 2021. Generalized source-free domain adaptation // Proceedings of the IEEE/CVF International Conference on Computer Vision. ICCV: 8978-8987 [DOI: 10.48550/arXiv.2108.01614]
Yin D, Gontijo Lopes R, Shlens J, Cubuk E D and Gilmer J. 2019. A Fourier perspective on model robustness in computer vision // Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: NIPS: 13276-13286 [DOI: 10.48550/arXiv.1906.08988]
Zhu Z, Hong X, Ma Z, Zhuang W, Ma Y, Dai Y and Wang Y. 2025. Reshaping the online data buffering and organizing mechanism for continual test-time adaptation // Proceedings of the European Conference on Computer Vision. ECCV: 415-433 [DOI: 10.48550/arXiv.2407.09367]
Zhang L, Wang Z, He J and Li Y. 2023. New image processing: VGG image style transfer with Gram matrix style features // Proceedings of the 5th International Conference on Artificial Intelligence and Computer Applications. Dalian, China: ICAICA: 468-472 [DOI: 10.1109/ICAICA58456.2023.10405398]
Maharana S K, Zhang B and Guo Y. 2024. PALM: pushing adaptive learning rate mechanisms for continual test-time adaptation [EB/OL]. [2024-11-19]. https://arxiv.org/abs/2403.10650.pdf
Zagoruyko S and Komodakis N. 2016. Wide residual networks [EB/OL]. [2017-06-14]. https://arxiv.org/abs/1605.07146.pdf
Zhou D W, Wang F Y, Ye H J and Zhan D C. 2023. An overview of category incremental learning algorithms based on deep learning. Chinese Journal of Computers, 46(8): 1577-1605 [DOI: 10.11897/SP.J.1016.2023.01577] (in Chinese)
Wang Z and Qu S J. 2024. Research progress and challenges in real-time semantic segmentation for deep learning. Journal of Image and Graphics, 29(5): 1188-1220 [DOI: 10.11834/jig.230605] (in Chinese)
Wang Y S, Hong J, Cheraghian A, Rahman S, Ahmedt-Aristizabal D, Petersson L and Harandi M. 2024. Continual test-time domain adaptation via dynamic sample selection // Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Tucson, Arizona, USA: WACV: 1701-1710 [DOI: 10.48550/arXiv.2310.03335]