Volume 47, Issue 6
Jun. 2025
Citation: SONG Xiaoying, HAO Chunyu, CHAI Li. Multi-Resolution Spatio-Temporal Fusion Graph Convolutional Network for Attention Deficit Hyperactivity Disorder Classification[J]. Journal of Electronics & Information Technology, 2025, 47(6): 1927-1936. doi: 10.11999/JEIT240872

Multi-Resolution Spatio-Temporal Fusion Graph Convolutional Network for Attention Deficit Hyperactivity Disorder Classification

doi: 10.11999/JEIT240872 cstr: 32379.14.JEIT240872
Funds:  The National Natural Science Foundation of China (62176192, 62173259, U2441244), The Natural Science Foundation of Zhejiang Province (LZ24F030006)
  • Received Date: 2024-10-15
  • Rev Recd Date: 2025-05-07
  • Available Online: 2025-05-22
  • Publish Date: 2025-06-30
Objective  Predicting neurodevelopmental disorders remains a central challenge at the intersection of neuroscience and artificial intelligence. Attention Deficit Hyperactivity Disorder (ADHD), a representative complex brain disorder, is difficult to diagnose because of its rising prevalence, clinical heterogeneity, and reliance on subjective criteria, all of which impede early and accurate detection. Developing objective, data-driven classification models is therefore of significant clinical relevance. Existing graph convolutional network-based approaches to functional brain network analysis are constrained by several limitations. Most adopt a single-resolution brain parcellation scheme, which limits their capacity to capture complementary features from multi-resolution functional Magnetic Resonance Imaging (fMRI) data. Moreover, the lack of effective cross-scale feature fusion restricts the integration of essential features across resolutions, hampering the modeling of hierarchical dependencies among brain regions. To address these limitations, this study proposes a Multi-resolution Spatio-Temporal Fusion Graph Convolutional Network (MSTF-GCN) that integrates spatiotemporal features across multiple fMRI resolutions and substantially improves the accuracy and robustness of functional brain network classification for ADHD.

Methods  The MSTF-GCN improves learning performance through two main components: (1) construction of multi-resolution, multi-channel networks, and (2) comprehensive fusion of temporal and spatial information. Multiple brain atlases at different resolutions are used to parcellate the brain and generate functional connectivity networks. Spatial features are extracted from these networks, and optimal nodal features are selected with Support Vector Machine-Recursive Feature Elimination (SVM-RFE). To preserve global temporal characteristics and capture hierarchical signal variations, both the original time series and their differential signals are processed by a temporal convolutional network, which extracts complex temporal features and inter-subject temporal correlations. Spatial features from the different resolutions are then fused with these temporal correlations to form population graphs, which are adaptively integrated by a multi-channel graph convolutional network. Non-imaging data are also incorporated, producing effective multi-channel, multi-modal spatiotemporal fusion features. The final classification is performed by a fully connected layer.
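To make the feature-extraction stage concrete, the Python fragment below is a minimal sketch under stated assumptions, not the authors' implementation: it assumes one (T, N) ROI time-series matrix per subject and atlas, uses a linear SVM inside scikit-learn's RFE for the SVM-RFE step, and the function names and retained feature count are illustrative.

    # Sketch of the per-atlas spatial feature pipeline described above.
    # Assumptions (not from the paper): a linear SVM for RFE, 10% of
    # features eliminated per iteration, and 200 retained connections.
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import LinearSVC

    def connectivity_features(ts):
        """Vectorize the upper triangle of the Pearson functional
        connectivity matrix for one subject's (T, N) ROI time series."""
        fc = np.corrcoef(ts.T)              # (N, N) ROI-by-ROI correlations
        iu = np.triu_indices_from(fc, k=1)  # drop diagonal and duplicates
        return fc[iu]

    def svm_rfe_select(X, y, n_keep=200):
        """SVM-RFE: recursively discard the connections with the smallest
        SVM weights until n_keep features remain."""
        selector = RFE(LinearSVC(C=1.0, dual=False),
                       n_features_to_select=n_keep, step=0.1)
        selector.fit(X, y)
        return selector.transform(X), selector.support_

    # One feature set per atlas resolution, each becoming its own channel:
    # X = np.stack([connectivity_features(ts) for ts in subject_series])
    # X_sel, mask = svm_rfe_select(X, labels)

Each parcellation resolution yields its own selected feature set, which then serves as the node features of one population-graph channel in the fusion stage.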
Results and Discussions  The proposed MSTF-GCN model is evaluated for ADHD classification on two independent sites from the ADHD-200 dataset, Peking and NI. It consistently outperforms existing methods, achieving classification accuracies of 75.92% on the Peking site and 82.95% on the NI site (Table 2, Table 3). Ablation studies confirm the contributions of two key components: (1) the multi-atlas, multi-resolution feature extraction strategy significantly improves classification accuracy (Table 4), supporting the utility of complementary cross-scale topological information; and (2) the multimodal fusion strategy, which incorporates the non-imaging variables gender and age, yields notable performance gains (Table 5). Furthermore, t-SNE visualization and inter-class distance analysis (Fig. 6) show that MSTF-GCN produces a feature space with clearer class separation, reflecting the effectiveness of its multi-channel spatiotemporal fusion design. Overall, MSTF-GCN achieves superior performance compared with state-of-the-art methods and demonstrates strong robustness across sites, offering a promising tool for the auxiliary diagnosis of brain disorders.

Conclusions  This study proposes a novel multi-channel graph embedding framework that integrates spatial topological and temporal features derived from multi-resolution fMRI data, leading to marked improvements in classification performance. Experimental results show that MSTF-GCN exceeds current state-of-the-art algorithms, with accuracy gains of 3.92% and 8.98% on the Peking and NI sites, respectively, confirming its strong performance and cross-site robustness in ADHD classification. Future work will focus on constructing more expressive hypergraph neural networks to capture higher-order relationships within functional brain networks.
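For the temporal branch and population-graph fusion described in the Methods, the PyTorch sketch below is likewise illustrative only: the layer sizes, kernel width, and the phenotype-gated similarity rule used to weight population-graph edges are assumptions for exposition, not the published architecture.

    # Illustrative sketch of the temporal branch and one population-graph
    # channel. All hyperparameters and names are assumptions.
    import torch
    import torch.nn as nn

    class TemporalBranch(nn.Module):
        """1-D convolutions over the raw ROI time series and its first-order
        difference, capturing global dynamics and local signal changes."""
        def __init__(self, n_rois, hidden=64):
            super().__init__()
            self.conv_raw = nn.Conv1d(n_rois, hidden, kernel_size=7, padding=3)
            self.conv_diff = nn.Conv1d(n_rois, hidden, kernel_size=7, padding=3)
            self.pool = nn.AdaptiveAvgPool1d(1)

        def forward(self, ts):                      # ts: (subjects, n_rois, T)
            diff = ts[:, :, 1:] - ts[:, :, :-1]     # differential signal
            h = torch.relu(self.conv_raw(ts))
            d = torch.relu(self.conv_diff(diff))
            return torch.cat([self.pool(h), self.pool(d)], dim=1).squeeze(-1)

    def population_adjacency(feats, sex, age, age_tol=2.0):
        """Edge weight = cosine similarity of subject features, gated by
        phenotype agreement (same sex, |age difference| <= age_tol years)."""
        f = nn.functional.normalize(feats, dim=1)
        sim = f @ f.T                                            # (S, S)
        same_sex = (sex[:, None] == sex[None, :]).float()
        close_age = (torch.abs(age[:, None] - age[None, :]) <= age_tol).float()
        return sim * (same_sex + close_age)

    class GCNLayer(nn.Module):
        """One propagation step ReLU(D^-1 A X W) on the population graph."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(d_in, d_out)

        def forward(self, x, adj):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)   # row degrees
            return torch.relu(self.lin((adj / deg) @ x))

In the full model, one such graph channel would be built per atlas resolution, the channel outputs adaptively weighted, and a fully connected layer would produce the final prediction.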
