TY - JOUR
T1 - DC-LoRA
T2 - Domain correlation low-rank adaptation for domain incremental learning
AU - Li, Lin
AU - Wang, Shiye
AU - Li, Changsheng
AU - Yuan, Ye
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2025 The Author(s)
PY - 2025/12
Y1 - 2025/12
N2 - Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose knowledge acquired on previous tasks when training on new tasks. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling the challenge of catastrophic forgetting. However, within domain incremental learning, a characteristic type of continual learning, there exists an additional overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the weights of the LoRA modules for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on publicly available domain incremental learning benchmark datasets. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
AB - Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose knowledge acquired on previous tasks when training on new tasks. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling the challenge of catastrophic forgetting. However, within domain incremental learning, a characteristic type of continual learning, there exists an additional overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the weights of the LoRA modules for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on publicly available domain incremental learning benchmark datasets. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
KW - Continual learning
KW - Domain correlation
KW - Domain incremental learning
KW - Parameter-efficient fine-tuning
UR - http://www.scopus.com/pages/publications/105015816762
U2 - 10.1016/j.hcc.2024.100270
DO - 10.1016/j.hcc.2024.100270
M3 - Article
AN - SCOPUS:105015816762
SN - 2667-2952
VL - 5
JO - High-Confidence Computing
JF - High-Confidence Computing
IS - 4
M1 - 100270
ER -