Volume 47, Issue 3, Mar. 2025
Citation: XIE Lixia, SHI Jingchen, YANG Hongyu, HU Ze, CHENG Xiang. Membership Inference Attacks Based on Graph Neural Network Model Calibration[J]. Journal of Electronics & Information Technology, 2025, 47(3): 780-791. doi: 10.11999/JEIT240477

Membership Inference Attacks Based on Graph Neural Network Model Calibration

doi: 10.11999/JEIT240477 cstr: 32379.14.JEIT240477
Funds: The Civil Aviation Joint Research Fund Project of the National Natural Science Foundation of China (U2433205); the National Natural Science Foundation of China (62201576, U1833107); the Jiangsu Provincial Basic Research Program Natural Science Foundation, Youth Fund (BK20230558)
  • Received Date: 2024-06-12
  • Revised Date: 2025-02-17
  • Available Online: 2025-02-26
  • Publish Date: 2025-03-01
Objective: Membership Inference Attacks (MIAs) against machine learning models pose a significant threat to the privacy of training data. The primary goal of an MIA is to determine whether a specific data sample is part of a target model's training set. MIAs expose potential privacy vulnerabilities in artificial intelligence models, making them a critical area of research in AI security. Investigating MIAs not only helps security researchers assess a model's vulnerability to such attacks, but also provides a theoretical foundation for establishing guidelines on the use of sensitive data and for developing strategies to improve model security. In recent years, Graph Neural Network (GNN) models have become a key focus of MIA research. However, GNN models often exhibit under-confidence in their predictions, marked by overly cautious probability distributions in their outputs. This prevents existing MIA methods from fully exploiting posterior probability information, resulting in reduced attack accuracy and higher false negative rates, and significantly limits the effectiveness and applicability of current attack methods. Addressing the under-confidence problem in GNN predictions and developing enhanced MIA approaches have therefore become both necessary and urgent.

Methods: Because GNN models are often under-confident in their predictions, which hampers the implementation of MIAs and results in high false negative rates, an MIA method based on GNN Model Calibration (MIAs-MC) is proposed (Fig. 1). First, a GNN model calibration method based on causal inference is designed and applied: causal graphs are extracted with an attention mechanism, causal and non-causal graphs are decoupled, a backdoor adjustment strategy is applied, and the resulting causal association graphs are used to train the GNN model (Fig. 2). Next, a shadow GNN model is constructed from shadow causal association graphs that share the same data distribution as the target causal association graph, enabling the shadow model to mimic the behavior of the target GNN model. Finally, the posterior probabilities produced by the shadow GNN model are used to build an attack dataset, on which an attack model is trained; the attack model then infers, from the posterior probabilities generated by the target GNN model, whether a target node belongs to the target model's training data (a minimal sketch of this shadow-model attack stage is given below).
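To make the shadow-model stage concrete, here is a minimal PyTorch sketch of the generic attack pipeline the Methods build on: posteriors from a trained shadow GNN, labeled by membership, train a binary attack classifier that is later applied to the target model's posteriors. It deliberately omits the paper's causal-inference calibration step, and every name here (AttackMLP, node_posteriors, build_attack_dataset) is illustrative rather than taken from the paper; the shadow and target GNNs are assumed to be trained modules mapping (x, edge_index) to per-node logits.

```python
# Illustrative sketch only: the generic shadow-model MIA stage, without the
# paper's causal-inference calibration. Assumes trained GNNs with the
# signature model(x, edge_index) -> [num_nodes, num_classes] logits.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttackMLP(nn.Module):
    """Binary attack model: member (1) vs. non-member (0) from posteriors."""

    def __init__(self, num_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, posteriors: torch.Tensor) -> torch.Tensor:
        # Sort each posterior vector so the attack sees only the confidence
        # "shape", independent of which class the node was assigned to.
        return self.net(torch.sort(posteriors, dim=-1, descending=True).values)


@torch.no_grad()
def node_posteriors(model, x, edge_index, node_idx):
    """Softmax posteriors of the given nodes under a trained GNN."""
    model.eval()
    return F.softmax(model(x, edge_index)[node_idx], dim=-1)


def build_attack_dataset(shadow_model, x, edge_index, member_idx, nonmember_idx):
    """Shadow-train nodes are labeled 1 (member), held-out nodes 0."""
    pos = node_posteriors(shadow_model, x, edge_index, member_idx)
    neg = node_posteriors(shadow_model, x, edge_index, nonmember_idx)
    feats = torch.cat([pos, neg])
    labels = torch.cat([torch.ones(len(pos)), torch.zeros(len(neg))]).long()
    return feats, labels


def train_attack_model(feats, labels, epochs=200, lr=1e-2):
    attack = AttackMLP(num_classes=feats.size(1))
    opt = torch.optim.Adam(attack.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(attack(feats), labels).backward()
        opt.step()
    return attack
```

At attack time, the trained classifier is applied to node_posteriors(target_model, ...) for the queried nodes, and the argmax over its two logits is the membership decision. In the paper's method, both the shadow and target GNNs would additionally be trained on calibrated causal association graphs before this stage.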
Results and Discussions: To assess the feasibility and effectiveness of the proposed attack, two attack modes are implemented and MIAs are conducted under both. The experimental results show that the proposed method consistently outperforms the baseline attack across metrics (standard definitions of the metrics are sketched after the Conclusions below). In Attack Mode 1, the proposed method is evaluated on the Cora, CiteSeer, PubMed, and Flickr datasets and compared against the baseline (Table 2 and Table 3). Relative to the baseline attack, it improves attack accuracy by 3.4% to 35.0% and attack precision by 1.2% to 34.6% for the GCN, GAT, GraphSAGE, and SGC models. The results further indicate that, after GNN model calibration, the shadow model mimics the prediction behavior of the target model more effectively, contributing to a higher success rate of MIAs against the target model (Table 4 and Table 5). Notably, the GAT model exhibits high robustness against MIAs under both the proposed and baseline methods. In Attack Mode 2, the attack performance of the proposed method is compared with the baseline across the same datasets (Fig. 4, Fig. 5, and Fig. 6): attack accuracy improves by 0.4% to 32.1%, attack precision by 0.3% to 31.8%, and the average attack false negative rate drops by 31.2%. Overall, the results from both attack modes indicate that calibrating the GNN model and training the attack model on the calibrated posterior probabilities significantly enhances MIA performance. However, attack performance varies across datasets and model architectures; analysis of the results shows that the effectiveness of the proposed method depends on the structural characteristics of the graph datasets and the specific configurations of the GNN architectures.

Conclusions: The proposed MIA method based on GNN model calibration constructs a causal association graph using a calibration technique rooted in causal inference. This causal association graph is then used to build shadow GNN models and attack models, enabling MIAs against target GNN models. The results confirm that GNN model calibration enhances the effectiveness of MIAs.
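For reference, a plain-Python sketch of the standard definitions behind the attack metrics reported above (attack accuracy, attack precision, and attack false negative rate), computed from binary membership labels; the function name attack_metrics is illustrative, not from the paper.

```python
# Standard binary-classification metrics for membership inference:
# the positive class (1) means "member of the training set".
def attack_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)                # correct membership calls
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how often a "member" call is right
    fnr = fn / (fn + tp) if (fn + tp) else 0.0        # true members the attack misses
    return accuracy, precision, fnr


# Example: 6 queried nodes, 3 of them true members.
print(attack_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 1, 0]))
# -> (0.666..., 0.666..., 0.333...)
```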