A comprehensive survey of continual learning
2025, pp. 1-32
Received: 2024-10-31; Revised: 2025-02-17; Accepted: 2025-02-25; Published online: 2025-02-26
DOI: 10.11834/jig.240661
Continual learning (CL) is a key problem in machine learning: the goal is to enable a model to keep learning new tasks while avoiding catastrophic forgetting, that is, while preserving its memory of previously learned tasks. CL already plays an important role in practical applications such as autonomous driving, robot control, and medical diagnosis systems. This paper surveys the latest research progress in CL and outlines promising directions for future work. To achieve the plasticity-stability balance between learning new knowledge and retaining old knowledge, researchers have proposed a wide range of methods, which can be grouped by their development path into traditional continual-training methods and methods built on pre-trained models. We first review the key techniques of traditional continual training: memory replay, regularization, and dynamic architectures. Memory replay methods store and replay samples from earlier tasks to help the model recall past knowledge. Regularization methods constrain parameter updates so that new tasks do not interfere with old ones. Dynamic architecture methods adjust the model structure or introduce new modules to cope with new tasks and avert catastrophic forgetting. We then examine the progress of CL methods based on pre-trained models. With the wide adoption of large-scale pre-training, such models have demonstrated strong generalization and knowledge-transfer ability, and CL methods built on them fall into fine-tuning-based and prompt-based approaches. Fine-tuning methods freeze part of the pre-trained parameters and update only specific layers, or use techniques such as learning-rate adjustment, to avoid over-modifying the pre-trained model. Prompt-based methods design and inject prompts that steer the model toward new tasks without large-scale parameter updates. The experimental results reported in this paper suggest that current CL tasks should give priority to methods based on pre-trained models. Finally, we discuss the open challenges and future directions of the field, focusing on how, under various practical constraints, pre-trained models can be combined with classical CL methods through new architectural designs and optimization strategies to meet the demands of increasingly complex real-world tasks.
Continual learning (CL), also known as lifelong learning, is one of the most significant challenges in machine learning. It refers to the capability of a system to learn new tasks while retaining knowledge from previously learned ones. CL is broadly relevant to real-world applications that require continual adaptation and learning, such as autonomous driving, robotics, and medical diagnosis. CL must contend with catastrophic forgetting, which occurs when a model overwrites or significantly degrades previously learned information while training on new tasks; the vulnerability is rooted in traditional machine learning paradigms, in which models are trained once on static datasets. The goal of CL is therefore to develop models that exhibit both plasticity (the ability to adapt to new information) and stability (the assurance that previous knowledge is not lost). Various strategies have been proposed in the literature to address this challenge, and this paper reviews them comprehensively. Broadly, CL techniques can be classified into those based on continual training and those based on pre-trained models. Methods based on continual training fall into three main categories: replay-based methods, regularization-based methods, and dynamic architecture methods.

Replay-based methods are among the most popular and well-established techniques in CL. They mitigate catastrophic forgetting by storing and replaying data from previously learned tasks: revisiting samples from prior tasks during training on new tasks effectively reminds the model of past knowledge and helps maintain accuracy on earlier tasks. Research on replay divides into two aspects: replay memory construction and replay memory utilization. Replay memory construction concerns how the memory is saved; methods fall into three types according to how knowledge is stored: storing raw data, building generative models, and storing data features. Methods that store raw data directly save the original data from old tasks together with the corresponding labels; the raw data can come in various formats, such as images, videos, audio, or text. During training on a new task, the stored data from old tasks is trained jointly with the new-task data, which helps prevent forgetting. The advantage of storing raw data is that it requires no additional operations for storage or utilization and stays consistent with the original model's training process. Generative replay has been spurred by recent advances in generative modeling: instead of raw data, a generative model synthesizes representative samples of past knowledge for replay, and such methods can be classified by the type of generative model used. The strength of generative models in CL lies in their ability to produce high-quality synthetic data, which helps the model overcome the catastrophic forgetting common in traditional methods and significantly enhances its long-term memory capacity. Storing data features is chosen when features represent the original data well; where privacy protection and storage constraints are a concern, storing features rather than raw data or additional models becomes a practical solution. Replay memory utilization, in turn, concerns how to leverage the stored samples effectively to enhance the benefit of replay.
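To make raw-data replay concrete, the following is a minimal PyTorch-style sketch of a reservoir-sampled buffer trained jointly with new-task data. The buffer capacity, sampling scheme, and unweighted joint loss are illustrative assumptions, not the recipe of any specific method surveyed here.

```python
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling, so every
    example seen so far has an equal chance of being kept."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data, self.labels = [], []
        self.num_seen = 0

    def add(self, x: torch.Tensor, y: torch.Tensor):
        for xi, yi in zip(x, y):
            if len(self.data) < self.capacity:
                self.data.append(xi); self.labels.append(yi)
            else:
                j = random.randint(0, self.num_seen)  # inclusive upper bound
                if j < self.capacity:
                    self.data[j], self.labels[j] = xi, yi
            self.num_seen += 1

    def sample(self, batch_size: int):
        idx = random.sample(range(len(self.data)), min(batch_size, len(self.data)))
        return (torch.stack([self.data[i] for i in idx]),
                torch.stack([self.labels[i] for i in idx]))

def replay_step(model, optimizer, buffer, x_new, y_new, replay_batch=32):
    """One step on the current task plus a rehearsal loss on old samples."""
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.num_seen > 0:
        x_old, y_old = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(x_old), y_old)  # recall old tasks
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    buffer.add(x_new, y_new)  # update memory after the step
    return loss.item()
```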
Utilization techniques include data augmentation, knowledge distillation, Bayesian methods, optimization and gradient projection, and representation alignment with bias correction.

Regularization-based methods address catastrophic forgetting by constraining the model's parameters so that they do not undergo drastic changes when learning new tasks. This approach enforces stability by penalizing large deviations in the parameters that encode previously learned knowledge. Regularization methods can be categorized into those based on Laplace approximation, task representation constraints, Bayesian regularization, and knowledge distillation. The Laplace approximation technique approximates the posterior distribution of an old task's parameters as a Gaussian and thereby constrains the parameters important to that task: penalty terms added to the loss function restrict changes to parameters tied to previous tasks while training on new ones, achieving balanced learning of new and old tasks and reducing the risk of catastrophic forgetting. Task representation constraint methods use the historical model to generate representations of current samples and apply regularization against these historical representations. Bayesian regularization updates model parameters within a Bayesian framework, allowing knowledge from previous tasks to be retained while new tasks are learned; regularizing within this framework effectively balances new and old learning. Knowledge distillation-based regularization, by contrast, requires no stored replay memory: it uses stored historical models as teachers, with the current training samples as input, to guide learning.
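As a concrete instance of the Laplace-approximation family described above, the sketch below implements an elastic-weight-consolidation-style quadratic penalty with a diagonal Fisher estimate. The estimator, the strength hyperparameter lam, and the usage shown in the trailing comments are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def estimate_diag_fisher(model, loader, device="cpu"):
    """Diagonal Fisher information on the old task's data, used as a
    per-parameter importance weight."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty discouraging important parameters from drifting
    away from the values they held after the old task."""
    penalty = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

# After finishing a task:
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = estimate_diag_fisher(model, old_task_loader)
# While training the next task:
#   loss = F.cross_entropy(model(x), y) + ewc_penalty(model, fisher, old_params)
```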
Dynamic architecture methods handle new tasks and new knowledge by gradually adjusting the structure of the model. They modify the neural network's architecture or parameters in response to changes in the input data and the demands of new tasks, adding or reallocating network resources to learn new knowledge without forgetting the old. By automatically expanding the network, activating important parameters, and freezing irrelevant parts, these methods let the model adapt to new tasks while avoiding catastrophic forgetting. Dynamic architecture methods can be further divided into multi-expert and subnetwork structures, dynamic sparsification and masking techniques, dynamic structural adjustment, and the learning of additional task-related modules.

In recent years, large pre-trained models, such as Transformer-based architectures, have shown remarkable success across domains thanks to their ability to generalize across tasks. Pre-trained on vast amounts of data, these models possess strong representational power and are increasingly applied in CL scenarios because they can learn new tasks with minimal forgetting. Pre-trained models can be used in CL through two primary strategies: fine-tuning-based methods and prompt-based methods. Fine-tuning adapts a pre-trained model to a new task by updating some or all of its parameters. In the context of CL, fine-tuning can be performed by freezing certain layers of the pre-trained model and updating only specific parts, such as task-specific layers, so that the model does not lose knowledge gained on earlier tasks. Another approach is to fine-tune with task-specific learning rates, ensuring that parameters important to previous tasks are modified less during training on new tasks.

Prompt-based methods represent a newer approach to leveraging pre-trained models for CL. Rather than adjusting the model's internal parameters, they guide the model's behavior by designing and inputting task-specific prompts. These prompts serve as auxiliary information that helps the model focus on the relevant aspects of the new task without altering the model's underlying architecture or parameters (a minimal sketch appears at the end of this abstract).

Despite these advances, several challenges remain. One significant challenge is the scalability of CL methods, especially in real-world applications where the number of tasks and the diversity of data can be vast. Another is achieving an optimal balance between plasticity and stability, since methods that overemphasize stability may limit the model's ability to learn effectively from new tasks. Looking ahead, research in CL is expected to focus on integrating large pre-trained models with traditional CL techniques, exploring novel architectural designs and optimization strategies that better address the demands of complex, real-world tasks. More work is also needed on CL methods that are computationally efficient, particularly for large-scale models, without sacrificing performance or flexibility. By combining the strengths of large models with classical CL techniques, the field is poised to make significant strides toward intelligent systems capable of lifelong learning in dynamic, ever-changing environments.
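To illustrate the prompt-based strategy discussed above, the sketch below implements a small prompt pool in the spirit of learning-to-prompt: the backbone stays frozen, and a query feature retrieves a few learnable prompt vectors by key matching. The pool size, dimensions, selection rule, and the helper names in the trailing usage comment are illustrative assumptions, not the design of any single surveyed method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """A learnable pool of prompts; a feature of the input (e.g., a frozen
    backbone's [CLS] embedding) acts as a query that retrieves the top-k
    best-matching prompts."""
    def __init__(self, pool_size=10, top_k=4, prompt_len=5, dim=768):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, dim); cosine similarity between query and prompt keys
        sim = F.normalize(query, dim=-1) @ F.normalize(self.keys, dim=-1).T
        idx = sim.topk(self.top_k, dim=-1).indices            # (B, top_k)
        # Gather and flatten the selected prompts: (B, top_k * prompt_len, dim)
        return self.prompts[idx].flatten(1, 2)

# Usage sketch (hypothetical helper names): only the pool and a classifier
# head receive gradients, so new tasks are absorbed by the prompts rather
# than by the pre-trained weights.
#   backbone.requires_grad_(False)
#   query = backbone.cls_feature(x)                 # assumed helper
#   tokens = torch.cat([pool(query), backbone.embed(x)], dim=1)
```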