Abstract
Neuroimaging offers powerful evidence for the automated diagnosis of major depressive disorder (MDD). However, discrepancies across imaging modalities hinder the exploration of cross-modal interactions and the effective integration of complementary features. To address this challenge, we propose a supervised Deep Adaptive Fusion Network (DAFN) that fully leverages the complementarity of multimodal neuroimaging information for the diagnosis of MDD. Specifically, high- and low-frequency features are extracted from the images using a customized convolutional neural network and multi-head self-attention encoders, respectively. A modality weight adaptation module dynamically adjusts the contribution of each modality during training, while a progressive information reinforcement training strategy strengthens the fused multimodal features. Finally, the performance of DAFN is evaluated on both an open-access dataset and a recruited dataset. The results demonstrate that DAFN achieves competitive performance in multimodal neuroimaging fusion for the diagnosis of MDD. The source code is available at: http://github.com/TTLi1996/DAFN.
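The abstract does not specify how the modality weight adaptation module is implemented; the sketch below is only an illustrative interpretation, not the authors' code (see the linked repository for the actual implementation). It assumes one learnable logit per modality, normalized with a softmax so that the per-modality weights adapt during training; the class name, parameters, and feature shapes are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveModalityFusion(nn.Module):
    """Illustrative sketch of a modality weight adaptation module (not the official DAFN code)."""

    def __init__(self, num_modalities: int):
        super().__init__()
        # One learnable logit per modality; softmax at fusion time keeps weights positive and summing to 1.
        self.logits = nn.Parameter(torch.zeros(num_modalities))

    def forward(self, feats):
        # feats: list of per-modality feature tensors, each of shape (batch, feat_dim)
        weights = torch.softmax(self.logits, dim=0)            # adaptive modality weights
        stacked = torch.stack(feats, dim=0)                    # (num_modalities, batch, feat_dim)
        fused = (weights.view(-1, 1, 1) * stacked).sum(dim=0)  # weighted sum over modalities
        return fused, weights

# Usage sketch with two hypothetical modalities (e.g., structural and functional MRI features):
if __name__ == "__main__":
    fusion = AdaptiveModalityFusion(num_modalities=2)
    f_smri = torch.randn(8, 128)   # assumed feature dimension of 128
    f_fmri = torch.randn(8, 128)
    fused, w = fusion([f_smri, f_fmri])
    print(fused.shape, w)          # torch.Size([8, 128]) and the learned modality weights
```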
| Original language | English |
|---|---|
| Article number | 108151 |
| Journal | Neural Networks |
| Volume | 194 |
| DOIs | |
| Publication status | Published - Feb 2026 |
| Externally published | Yes |
Keywords
- adaptive cross-modal information fusion
- computer-aided diagnosis
- major depressive disorder
- multimodal neuroimaging