
Entropy-Driven Black-box Transferable Adversarial Attack Method for Graph Neural Networks

WU Tao, JI Qionghui, XIAN Xingping, QIAO Shaojie, WANG Chao, CUI Canyixing

Citation: WU Tao, JI Qionghui, XIAN Xingping, QIAO Shaojie, WANG Chao, CUI Canyixing. Entropy-Driven Black-box Transferable Adversarial Attack Method for Graph Neural Networks[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250303


doi: 10.11999/JEIT250303 cstr: 32379.14.JEIT250303
Details
    About the authors:

    WU Tao: male, professor, Ph.D. supervisor; research interests include graph neural networks, knowledge graphs, artificial intelligence security, and graph data mining

    JI Qionghui: male, master's student; research interests include graph neural networks, knowledge graphs, and artificial intelligence security

    XIAN Xingping: female, associate professor, master's supervisor; research interests include graph data mining, data privacy protection, and the security of intelligent algorithms

    QIAO Shaojie: male, professor, Ph.D. supervisor; research interests include big data technology and applications, domain-oriented big data analytics, and spatial artificial intelligence

    WANG Chao: male, associate professor, master's supervisor; research interests include time-series and graph data representation, and natural language processing and its applications

    CUI Canyixing: female, Ph.D. student; research interests include graph neural networks, knowledge graphs, and artificial intelligence security

    Corresponding author: XIAN Xingping, xianxp@cqupt.edu.cn

  • CLC number: TN915.08; TP393


Funds: The National Natural Science Foundation of China (62376047, 62106030), Key Project of Innovation and Development Fund of Chongqing Natural Science Foundation (CSTB2023NSCQ-LZX0003), Key Project of Science and Technology Program of Chongqing Education Commission (KJZD-K202300603, KJZD-K202500604)
  • Abstract: The adversarial robustness of Graph Neural Networks (GNNs) is critical to their deployment in security-sensitive scenarios. In recent years, adversarial attacks, and transfer-based black-box attacks in particular, have attracted wide attention. However, existing methods rely heavily on the gradient information of surrogate models, which limits the transferability of the generated adversarial examples. Moreover, most existing methods select perturbation strategies from a purely global perspective, resulting in low attack efficiency. To address these problems, this paper explores the relationship between entropy and node vulnerability and proposes a novel adversarial attack approach. Specifically, for homogeneous graph neural networks, node entropy is used to capture the feature smoothness of each node's neighborhood subgraph, yielding a Node-Entropy-based transferable adversarial Attack method (NEAttack). Building on this, a Graph-Entropy-based adversarial attack method for Heterogeneous graph neural networks (GEHAttack) is proposed. Extensive experiments on multiple models and datasets verify the effectiveness of the proposed methods and reveal the important role that the correlation between node entropy and node vulnerability plays in improving adversarial attack performance.
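The abstract's core intuition, node entropy as a proxy for the feature smoothness of a node's neighborhood subgraph, can be illustrated with a minimal sketch. The exact entropy definition used by NEAttack is given in the full paper, not on this page; the version below, the helper name `node_entropy`, and the toy graph are illustrative assumptions only:

```python
# Illustrative sketch only: approximate "neighborhood feature smoothness" by
# the Shannon entropy of the discrete label distribution over a node's
# neighbors, then rank nodes by it as candidate attack targets.
import math
from collections import Counter

def node_entropy(adj, labels, v):
    """Shannon entropy (bits) of the label distribution over v's neighbors."""
    neigh = adj[v]
    if not neigh:
        return 0.0
    counts = Counter(labels[u] for u in neigh)
    n = len(neigh)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy graph: adjacency list and one discrete attribute (e.g., class) per node.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [3]}
labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "c"}

scores = {v: node_entropy(adj, labels, v) for v in adj}
# Nodes whose neighborhoods mix classes (high entropy, low smoothness) come
# first; under the abstract's intuition, these are plausibly more vulnerable.
targets = sorted(scores, key=scores.get, reverse=True)
```

Under this toy scoring, an attacker would concentrate its perturbation budget on the high-entropy nodes at the front of `targets` rather than spreading it globally.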
  • Figure 1  Intrinsic connection between the feature smoothness of a node's associated subgraph and its vulnerability

    Figure 2  Framework of the node-entropy-based transferable adversarial attack method for graph neural networks

    Figure 3  Effect of the perturbation budget on attack performance

    Figure 4  Effect of the hyperparameter $\lambda$ on attack performance

    Figure 5  Evolution of node entropy over iterations during perturbation generation

    Figure 6  Comparison of the attack effectiveness of different attack methods

    Table 1  Accuracy results in the homogeneous-graph setting

    Method Cora CoraML Citeseer PubMed
    Sur_Mod Vic_Mod GCN GAT SGC GCN GAT SGC GCN GAT SGC GCN GAT SGC
    Clean 0.852 1 0.859 7 0.849 1 0.849 0 0.853 4 0.827 0 0.754 7 0.769 9 0.750 6 0.867 3 0.857 9 0.791 5
    GCN RA 0.848 6 0.840 4 0.843 6 0.838 7 0.837 6 0.814 0 0.735 6 0.742 6 0.747 5 0.857 7 0.843 2 0.764 2
    DICE 0.844 1 0.843 6 0.836 5 0.825 4 0.837 2 0.807 8 0.731 3 0.754 6 0.733 4 0.856 1 0.848 7 0.761 4
    Mettack 0.839 0 0.850 6 0.820 1 0.807 9 0.840 3 0.792 3 0.721 9 0.721 0 0.729 9 0.803 1 0.806 2 0.720 2
    PGD 0.839 5 0.838 5 0.827 0 0.813 2 0.848 1 0.775 8 0.735 8 0.731 0 0.731 0 0.800 5 0.817 0 0.721 5
    AtkSE 0.821 5 0.831 1 0.794 7 0.816 1 0.825 8 0.790 5 0.719 6 0.729 3 0.719 8 OOM OOM OOM
    GraD 0.827 5 0.822 1 0.806 7 0.804 0 0.830 6 0.765 2 0.718 1 0.730 5 0.713 3 OOM OOM OOM
    NEAttack 0.808 9 0.813 4 0.792 0 0.805 2 0.809 2 0.755 3 0.714 2 0.706 8 0.703 9 0.776 2 0.782 6 0.700 6
    GAT RA 0.838 0 0.838 0 0.839 0 0.835 6 0.843 4 0.816 2 0.748 1 0.754 1 0.730 9 0.861 7 0.844 3 0.764 8
    DICE 0.845 6 0.842 6 0.834 3 0.837 2 0.833 6 0.813 6 0.746 3 0.745 9 0.724 3 0.857 9 0.837 1 0.753 6
    Mettack 0.843 6 0.840 1 0.832 5 0.812 3 0.840 3 0.755 8 0.732 8 0.735 8 0.684 8 0.814 0 0.812 3 0.722 5
    PGD 0.838 0 0.847 0 0.821 3 0.808 1 0.837 6 0.728 6 0.735 2 0.738 7 0.705 0 0.821 3 0.826 4 0.720 6
    AtkSE 0.830 2 0.812 5 0.839 5 0.810 7 0.805 8 0.790 9 0.742 9 0.717 6 0.744 7 OOM OOM OOM
    GraD 0.844 6 0.801 5 0.829 0 0.813 4 0.804 1 0.789 1 0.729 3 0.728 2 0.705 6 OOM OOM OOM
    NEAttack 0.814 9 0.804 8 0.801 3 0.796 3 0.767 3 0.688 6 0.707 5 0.725 2 0.675 4 0.782 9 0.790 4 0.686 1
    SGC RA 0.835 5 0.843 6 0.823 9 0.834 9 0.838 1 0.811 6 0.730 7 0.742 1 0.733 9 0.857 7 0.841 7 0.769 1
    DICE 0.838 4 0.845 4 0.826 5 0.821 4 0.827 7 0.805 2 0.734 6 0.738 8 0.743 8 0.854 3 0.837 2 0.764 6
    Mettack 0.841 6 0.843 1 0.823 9 0.820 3 0.838 1 0.786 9 0.747 0 0.731 8 0.726 9 0.799 5 0.800 3 0.701 0
    PGD 0.837 1 0.834 0 0.814 9 0.821 4 0.846 5 0.738 9 0.733 4 0.726 4 0.737 6 0.813 7 0.794 6 0.717 0
    AtkSE 0.845 6 0.826 5 0.810 5 0.800 3 0.811 4 0.774 5 0.724 7 0.742 3 0.719 3 OOM OOM OOM
    GraD 0.838 5 0.823 4 0.827 0 0.812 3 0.839 0 0.771 8 0.738 8 0.740 5 0.702 6 OOM OOM OOM
    NEAttack 0.822 4 0.794 8 0.805 3 0.784 6 0.787 8 0.718 0 0.725 7 0.712 3 0.669 4 0.791 7 0.799 6 0.674 7
    *Sur_Mod and Vic_Mod denote the surrogate model and the victim model, respectively; Clean denotes no perturbation; RA denotes Random Attack; the best results are shown in bold; OOM denotes out of memory
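Among the baselines in Tables 1 and 2, DICE ("Disconnect Internally, Connect Externally") is simple enough to sketch without any GNN machinery. The following is a minimal, dependency-free approximation of that baseline, not the paper's NEAttack; the data representation (adjacency sets, one class label per node) is an assumption made for illustration:

```python
import random

def dice_attack(adj, labels, budget, seed=0):
    """DICE baseline sketch: delete edges whose endpoints share a class
    ("disconnect internally") and insert edges between different-class
    endpoints ("connect externally"), spending at most `budget` modifications.

    adj: dict node -> set of neighbors (undirected); labels: dict node -> class.
    Returns a perturbed copy of adj; the input graph is left untouched.
    """
    rng = random.Random(seed)
    pert = {v: set(nb) for v, nb in adj.items()}
    nodes = list(pert)
    for _ in range(budget):
        if rng.random() < 0.5:
            # Disconnect internally: drop an edge joining same-class endpoints.
            internal = [(u, v) for u in nodes for v in pert[u]
                        if u < v and labels[u] == labels[v]]
            if internal:
                u, v = rng.choice(internal)
                pert[u].discard(v)
                pert[v].discard(u)
                continue
        # Connect externally: add an edge joining different-class endpoints
        # (a same-class or existing pair simply consumes the attempt).
        u, v = rng.sample(nodes, 2)
        if labels[u] != labels[v]:
            pert[u].add(v)
            pert[v].add(u)
    return pert
```

Each victim model is then trained or evaluated on the perturbed graph; the drop relative to the Clean row measures attack strength, and using a perturbation crafted against one surrogate (Sur_Mod) to degrade a different victim (Vic_Mod) is what makes the attack transfer-based.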

    Table 2  F1-score results in the homogeneous-graph setting

    Method Cora CoraML Citeseer PubMed
    Sur_Mod Vic_Mod GCN GAT SGC GCN GAT SGC GCN GAT SGC GCN GAT SGC
    Clean 0.842 3 0.843 7 0.846 1 0.857 3 0.827 1 0.795 3 0.703 9 0.689 4 0.682 2 0.855 2 0.852 0 0.772 3
    GCN RA 0.824 3 0.836 6 0.837 1 0.848 9 0.819 3 0.751 4 0.685 6 0.658 5 0.675 7 0.851 0 0.836 6 0.742 3
    DICE 0.833 6 0.838 4 0.838 8 0.843 2 0.824 1 0.788 3 0.690 6 0.643 4 0.673 6 0.843 0 0.832 4 0.742 2
    Mettack 0.823 1 0.826 5 0.813 2 0.833 6 0.813 8 0.732 8 0.655 5 0.635 7 0.630 5 0.823 0 0.816 2 0.709 1
    PGD 0.824 5 0.820 6 0.819 2 0.827 8 0.805 3 0.754 6 0.663 5 0.639 6 0.639 4 0.816 2 0.820 0 0.711 6
    AtkSE 0.816 3 0.829 4 0.813 0 0.808 6 0.815 2 0.737 4 0.648 2 0.644 0 0.637 7 OOM OOM OOM
    GraD 0.802 4 0.810 4 0.814 3 0.786 6 0.811 5 0.738 0 0.629 9 0.638 1 0.624 5 OOM OOM OOM
    NEAttack 0.786 5 0.803 2 0.783 0 0.792 3 0.786 0 0.700 2 0.632 6 0.623 2 0.603 7 0.778 2 0.792 8 0.665 5
    GAT RA 0.827 9 0.834 3 0.830 7 0.826 2 0.818 4 0.735 5 0.699 8 0.650 6 0.659 8 0.849 7 0.831 0 0.746 8
    DICE 0.838 5 0.833 8 0.837 3 0.818 3 0.811 4 0.760 1 0.687 4 0.654 0 0.666 8 0.852 5 0.829 9 0.745 3
    Mettack 0.811 8 0.810 8 0.820 2 0.838 3 0.795 7 0.733 4 0.670 5 0.649 7 0.658 3 0.818 2 0.811 0 0.705 0
    PGD 0.826 4 0.815 2 0.807 9 0.831 7 0.787 6 0.689 5 0.664 4 0.637 5 0.646 2 0.826 1 0.822 5 0.716 5
    AtkSE 0.823 2 0.827 1 0.815 3 0.823 1 0.804 2 0.733 3 0.654 5 0.624 8 0.669 4 OOM OOM OOM
    GraD 0.811 9 0.795 3 0.812 7 0.825 4 0.803 5 0.743 7 0.645 7 0.629 1 0.653 3 OOM OOM OOM
    NEAttack 0.798 5 0.808 6 0.788 5 0.800 9 0.780 0 0.643 8 0.627 4 0.602 4 0.613 6 0.799 5 0.767 3 0.659 9
    SGC RA 0.787 9 0.828 3 0.823 8 0.834 2 0.821 2 0.751 2 0.693 2 0.676 4 0.679 9 0.847 6 0.833 6 0.743 7
    DICE 0.791 1 0.836 2 0.836 7 0.838 5 0.812 1 0.767 3 0.696 0 0.662 3 0.669 4 0.842 5 0.839 1 0.741 0
    Mettack 0.825 4 0.815 1 0.800 9 0.827 5 0.812 4 0.733 1 0.669 9 0.642 6 0.648 1 0.811 7 0.806 8 0.710 2
    PGD 0.811 9 0.818 0 0.792 7 0.822 3 0.784 0 0.700 3 0.657 5 0.639 1 0.650 5 0.809 4 0.820 4 0.695 5
    AtkSE 0.824 5 0.819 2 0.816 4 0.817 8 0.809 7 0.730 0 0.648 4 0.646 3 0.643 7 OOM OOM OOM
    GraD 0.806 2 0.809 6 0.809 6 0.825 9 0.800 5 0.736 3 0.655 2 0.635 4 0.632 0 OOM OOM OOM
    NEAttack 0.785 2 0.791 6 0.775 1 0.816 5 0.775 6 0.651 0 0.625 7 0.613 9 0.600 1 0.787 9 0.774 3 0.643 5
    *Sur_Mod and Vic_Mod denote the surrogate model and the victim model, respectively; Clean denotes no perturbation; RA denotes Random Attack; the best results are shown in bold; OOM denotes out of memory

    Table 3  Micro-F1 results in the heterogeneous-graph setting

    Dataset Attack method HAN HGT SimpleHGN RGCN RoHe FastRo-HGCN
    ACM Clean 0.916 8 0.924 0 0.901 5 0.921 9 0.911 7 0.927 1
    RA 0.855 0 0.846 3 0.856 1 0.880 0 0.878 7 0.905 0
    HGB 0.800 8 0.844 0 0.719 5 0.836 1 0.905 7 0.921 5
    CLGA 0.862 6 0.892 5 0.861 3 0.843 1 0.893 2 0.911 9
    GHAC 0.746 2 0.847 2 0.819 5 0.827 5 0.887 7 0.909 6
    GEHA 0.713 4 0.802 3 0.749 2 0.818 4 0.878 5 0.893 6
    IMDB Clean 0.604 3 0.613 7 0.592 6 0.603 7 0.512 0 0.602 9
    RA 0.595 6 0.609 2 0.586 5 0.581 4 0.505 9 0.595 3
    HGB 0.452 3 0.477 7 0.488 6 0.559 4 0.498 5 0.589 1
    CLGA 0.571 2 0.600 7 0.577 7 0.578 0 0.500 4 0.595 5
    GHAC 0.485 0 0.468 2 0.538 4 0.555 1 0.497 5 0.579 8
    GEHA 0.424 7 0.461 7 0.454 6 0.548 4 0.487 0 0.575 0
    DBLP Clean 0.934 9 0.941 5 0.940 0 0.935 1 0.928 0 0.935 7
    RA 0.902 6 0.922 1 0.927 7 0.915 0 0.917 3 0.930 8
    HGB 0.720 6 0.801 5 0.812 0 0.877 1 0.921 2 0.924 8
    CLGA 0.892 9 0.916 3 0.924 3 0.916 6 0.922 7 0.932 0
    GHAC 0.809 6 0.847 6 0.885 2 0.910 5 0.917 0 0.922 6
    GEHA 0.678 8 0.747 5 0.796 8 0.861 9 0.912 0 0.911 5
    * Clean: no perturbation; RA: Random Attack; GEHA: GEHAttack

    Table 4  Macro-F1 results in the heterogeneous-graph setting

    Dataset Attack method HAN HGT SimpleHGN RGCN RoHe FastRo-HGCN
    ACM Clean 0.920 6 0.925 7 0.903 5 0.921 9 0.910 3 0.914 1
    RA 0.848 8 0.838 0 0.854 1 0.885 0 0.884 5 0.882 0
    HGB 0.799 6 0.845 7 0.741 5 0.836 1 0.904 3 0.908 5
    CLGA 0.860 8 0.884 2 0.833 3 0.863 1 0.891 8 0.898 9
    GHAC 0.755 2 0.848 9 0.821 5 0.817 5 0.886 3 0.896 6
    GEHA 0.737 0 0.804 0 0.762 2 0.808 4 0.877 1 0.880 6
    IMDB Clean 0.582 8 0.591 4 0.573 6 0.613 1 0.536 4 0.612 1
    RA 0.569 2 0.578 9 0.567 5 0.590 8 0.530 3 0.604 5
    HGB 0.440 4 0.466 7 0.416 8 0.526 7 0.522 9 0.598 3
    CLGA 0.529 5 0.548 3 0.498 2 0.577 5 0.523 8 0.604 7
    GHAC 0.443 4 0.440 3 0.490 5 0.528 6 0.521 9 0.589 0
    GEHA 0.398 1 0.429 4 0.389 3 0.507 8 0.511 4 0.584 2
    DBLP Clean 0.918 5 0.929 3 0.921 7 0.923 4 0.917 8 0.926 1
    RA 0.882 8 0.909 9 0.909 4 0.903 3 0.907 1 0.921 2
    HGB 0.731 8 0.789 3 0.793 7 0.785 4 0.911 0 0.915 2
    CLGA 0.876 5 0.893 6 0.896 0 0.905 8 0.912 2 0.914 8
    GHAC 0.792 6 0.866 2 0.887 4 0.868 7 0.906 2 0.911 9
    GEHA 0.697 2 0.775 3 0.778 5 0.774 2 0.901 8 0.901 9
    * Clean: no perturbation; RA: Random Attack; GEHA: GEHAttack
Publication history
  • Received: 2025-04-25
  • Revised: 2025-09-10
  • Published online: 2025-09-16
