Deep adaptive fusion network with multimodal neuroimaging information for MDD diagnosis: an open data study

Tongtong Li, Kai Li, Ziyang Zhao, Qi Sun, Xinyan Zhang, Zhijun Yao*, Jiansong Zhou, Bin Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Neuroimaging offers powerful evidence for the automated diagnosis of major depressive disorder (MDD). However, discrepancies across imaging modalities hinder the exploration of cross-modal interactions and the effective integration of complementary features. To address this challenge, we propose a supervised Deep Adaptive Fusion Network (DAFN) that fully exploits the complementarity of multimodal neuroimaging information for MDD diagnosis. Specifically, high- and low-frequency features are extracted from the images by a customized convolutional neural network and multi-head self-attention encoders, respectively. A modality weight adaptation module dynamically adjusts the contribution of each modality during training, while a progressive information reinforcement training strategy strengthens the fused multimodal features. Finally, the performance of DAFN is evaluated on both an open-access dataset and a recruited dataset. The results demonstrate that DAFN achieves competitive performance in multimodal neuroimaging fusion for MDD diagnosis. The source code is available at: http://github.com/TTLi1996/DAFN.
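A minimal sketch of what such a modality weight adaptation step might look like, written in PyTorch; the module name, dimensions, and gating design below are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    # Hypothetical module: fuses per-modality feature vectors with
    # input-dependent weights, in the spirit of the abstract's
    # modality weight adaptation idea.
    def __init__(self, feat_dim, num_classes=2):
        super().__init__()
        self.gate = nn.Linear(feat_dim, 1)           # scores each modality embedding
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        # feats: (batch, num_modalities, feat_dim), one embedding per imaging modality
        scores = self.gate(feats).squeeze(-1)            # (batch, num_modalities)
        weights = torch.softmax(scores, dim=-1)          # adaptive modality weights
        fused = (weights.unsqueeze(-1) * feats).sum(1)   # weighted sum over modalities
        return self.classifier(fused)                    # (batch, num_classes) logits

# Example: two modalities (e.g., structural and functional MRI embeddings)
model = AdaptiveFusion(feat_dim=128)
logits = model(torch.randn(4, 2, 128))  # batch of 4 subjects -> (4, 2) logits

The softmax over modality scores mirrors the dynamic per-modality weighting described above; the high-/low-frequency feature extractors and the progressive information reinforcement training schedule are omitted here for brevity.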

Original language: English
Article number: 108151
Journal: Neural Networks
Volume: 194
Publication status: Published - Feb 2026
Externally published: Yes

Keywords

  • Adaptive cross-modal information fusion
  • Computer-aided diagnosis
  • Major depressive disorder
  • Multimodal neuroimaging
