Weakly Supervised Recognition of Aerial Adversarial Maneuvers via Contrastive Learning

ZHU Longjun, YUAN Weiwei, MEN Xuefeng, TONG Wei, WU Qi

Citation: ZHU Longjun, YUAN Weiwei, MEN Xuefeng, TONG Wei, WU Qi. Weakly Supervised Recognition of Aerial Adversarial Maneuvers via Contrastive Learning[J]. Journal of Electronics & Information Technology, 2025, 47(11): 4504-4514. doi: 10.11999/JEIT250495


doi: 10.11999/JEIT250495 cstr: 32379.14.JEIT250495
Funds: The National Natural Science Foundation of China (T2325018, 62171274), The Natural Science Foundation of Jiangsu Province (BK20240641)
Article information
    About the authors:

    ZHU Longjun: Female, Ph.D. candidate and Associate Professor. Her research interests include pattern recognition, machine learning, and robust control

    YUAN Weiwei: Female, Professor. Her research interests include data mining, intelligent computing, and aerospace applications of artificial intelligence

    MEN Xuefeng: Male, Engineer. His research interests include deep learning and applications of brain-cognition technology

    TONG Wei: Male, Ph.D. His research interests include robot vision, human-machine interaction, medical image analysis, and brain cognition

    WU Qi: Male, Professor. His research interests include deep learning, fatigue recognition, and human-machine interaction

    Corresponding author:

    WU Qi, edmondqwu@163.com

  • CLC number: TN911.7; TP181.8; TP391.41

  • Abstract: To address the difficulty of obtaining labeled flight-maneuver data and the insufficient extraction of temporal features in aerial adversarial scenarios, this paper proposes a weakly supervised maneuver recognition method based on contrastive learning to improve recognition performance. The Simple framework for Contrastive Learning of visual Representations (SimCLR) is extended to time-series analysis: data augmentation strategies tailored to time series are designed, and a feature space with temporal invariance is constructed. The contrastive learning mechanism then sets up a competition between positive and negative sample groups within this feature space, effectively suppressing pseudo-label noise. Finally, combined with fine-tuning, the method is validated experimentally on DCS World flight-simulation data. The results show that the method exploits the latent information in time-series data and performs well when labeled data are scarce, offering a new approach for aerial adversarial maneuver recognition and time-series analysis.
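As a rough illustration of the pretraining stage described in the abstract, the sketch below pairs two stochastic augmentations of each maneuver window and trains a time-series encoder with the SimCLR NT-Xent objective. This is a minimal sketch: the encoder architecture, projection head, temperature, and input layout (batch, channels, time) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of SimCLR-style contrastive pretraining on time series
# (illustrative only; architecture and hyperparameters are assumptions,
# not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Toy 1-D convolutional encoder mapping a (batch, channels, time) window
    to an embedding, followed by a SimCLR-style projection head."""
    def __init__(self, in_channels=6, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.proj = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, embed_dim))

    def forward(self, x):
        return self.proj(self.backbone(x))

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, d)
    sim = z @ z.t() / temperature                             # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                     # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                      # positive = the other view

# Usage: two stochastic augmentations of the same maneuver window form a positive pair.
# encoder = TSEncoder(); loss = nt_xent(encoder(aug1(x)), encoder(aug2(x)))
```

In this setup the training signal comes entirely from agreement between the two augmented views, which is what allows unlabeled flight trajectories to be exploited before any fine-tuning.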
  • Fig. 1  Framework of weakly supervised maneuver recognition based on contrastive learning

    Fig. 2  Effect of the fine-tuning ratio on the accuracy of the two schemes

    Fig. 3  Comparison of the average accuracy of the two schemes

    Table 1  Composition of the maneuver datasets

    Tier    Dataset  Maneuver types included
    Base    D1       half roll, dive, aileron roll, circling, climb
            D2       half roll, aileron roll, sharp turn, climb, spiral descent
            D3       half roll, dive, climb, tail slide, spiral descent
    Fusion  D4       half roll, dive, aileron roll, sharp turn, circling, climb, spiral descent
            D5       half roll, dive, aileron roll, circling, climb, tail slide, spiral descent
            D6       half roll, dive, aileron roll, sharp turn, climb, tail slide, spiral descent
    Full    D7       half roll, dive, aileron roll, sharp turn, circling, climb, tail slide, spiral descent

    Table 2  Partition of fine-tuning ratios

    Scenario            Data range (%)      Step (%)  Number of test points
    Extremely low data  2, 4, 6, 8, 10      2         5
    Medium-low data     12, 14, 16, 18, 20  2         5
    Low data            22, 24, 26, 28, 30  2         5
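For concreteness, here is one way the labeled fine-tuning subsets implied by Table 2 could be drawn. The class-stratified sampler below and its parameters are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch of drawing labeled fine-tuning subsets for the ratios in Table 2
# (assumed sampling procedure, shown only for illustration).
import numpy as np

def finetune_subset(labels, ratio, seed=0):
    """Return indices of a class-stratified labeled subset of roughly ratio*N samples."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        k = max(1, int(round(ratio * cls_idx.size)))
        idx.extend(rng.choice(cls_idx, size=k, replace=False))
    return np.sort(np.array(idx))

# Ratios evaluated across the three scenarios: 2%..10%, 12%..20%, 22%..30%, step 2%.
ratios = [r / 100 for r in range(2, 31, 2)]
```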

    Table 3  Recognition accuracy in the extremely-low-data scenario

    Fine-tuning   D1            D2            D3            D4            D5            D6            D7
    ratio (%)     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM
    2             0.356  0.545  0.328  0.281  0.533  0.466  0.429  0.540  0.331  0.320  0.420  0.476  0.316  0.327
    4             0.675  0.828  0.604  0.625  0.730  0.742  0.627  0.745  0.553  0.625  0.605  0.755  0.557  0.696
    6             0.751  0.893  0.682  0.765  0.776  0.832  0.700  0.802  0.670  0.780  0.678  0.824  0.655  0.774
    8             0.820  0.955  0.708  0.835  0.803  0.870  0.738  0.855  0.698  0.814  0.691  0.857  0.698  0.849
    10            0.830  0.958  0.694  0.816  0.805  0.832  0.751  0.859  0.726  0.867  0.710  0.867  0.707  0.847

    Table 4  Recognition accuracy in the medium-low-data scenario

    Fine-tuning   D1            D2            D3            D4            D5            D6            D7
    ratio (%)     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM
    12            0.843  0.937  0.758  0.956  0.838  0.917  0.763  0.859  0.713  0.857  0.718  0.886  0.763  0.908
    14            0.862  0.980  0.763  0.886  0.840  0.917  0.767  0.859  0.736  0.886  0.735  0.901  0.757  0.861
    16            0.867  0.980  0.775  0.909  0.867  0.981  0.768  0.831  0.755  0.901  0.755  0.930  0.770  0.873
    18            0.871  0.958  0.779  0.886  0.890  0.999  0.781  0.859  0.749  0.915  0.759  0.930  0.775  0.896
    20            0.875  0.980  0.803  0.979  0.879  0.959  0.768  0.816  0.759  0.915  0.764  0.930  0.773  0.896

    Table 5  Recognition accuracy in the low-data scenario

    Fine-tuning   D1            D2            D3            D4            D5            D6            D7
    ratio (%)     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM     BM     VM
    22            0.890  0.977  0.783  0.862  0.888  0.979  0.775  0.830  0.771  0.912  0.765  0.926  0.783  0.916
    24            0.883  0.979  0.781  0.859  0.889  0.999  0.798  0.855  0.753  0.897  0.761  0.915  0.772  0.895
    26            0.882  0.976  0.798  0.885  0.886  0.998  0.815  0.898  0.770  0.915  0.773  0.926  0.782  0.918
    28            0.892  0.977  0.795  0.905  0.887  0.999  0.817  0.902  0.771  0.912  0.775  0.928  0.780  0.918
    30            0.895  0.979  0.791  0.861  0.870  0.977  0.816  0.915  0.772  0.915  0.768  0.928  0.789  0.928

    Table 6  Recognition accuracy of different data augmentation strategies on each dataset

    Dataset  Time compression  Scaling  Permutation  Masking  Flipping  Random combination
    D1       0.900             0.905    0.890        0.910    0.895     0.925
    D2       0.805             0.800    0.790        0.808    0.795     0.821
    D3       0.875             0.880    0.865        0.883    0.870     0.895
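The augmentation families compared in Table 6 can be pictured with toy NumPy transforms like the ones below; the parameter choices (compression factor, scaling noise, number of segments, masking probability) are assumptions for illustration rather than the paper's settings.

```python
# Toy NumPy versions of the augmentation families in Table 6
# (illustrative signatures and parameters; not the paper's code).
import numpy as np

def time_compress(x, factor=0.8):
    """Resample a (time, features) window to a shorter length, then pad back."""
    t = x.shape[0]
    new_t = max(2, int(t * factor))
    src = np.linspace(0, t - 1, new_t)
    comp = np.stack([np.interp(src, np.arange(t), x[:, j]) for j in range(x.shape[1])], axis=1)
    return np.pad(comp, ((0, t - new_t), (0, 0)), mode='edge')

def scale(x, sigma=0.1):
    """Multiply each feature channel by a random factor near 1."""
    return x * np.random.normal(1.0, sigma, size=(1, x.shape[1]))

def permute(x, n_segments=4):
    """Split the window into segments and shuffle their order."""
    segs = np.array_split(x, n_segments, axis=0)
    return np.concatenate([segs[i] for i in np.random.permutation(n_segments)], axis=0)

def mask(x, p=0.1):
    """Zero out random time steps."""
    m = np.random.rand(x.shape[0], 1) > p
    return x * m

def flip(x):
    """Reverse the window along the time axis."""
    return x[::-1].copy()

def random_combination(x, k=2):
    """Apply k randomly chosen augmentations in sequence."""
    ops = [time_compress, scale, permute, mask, flip]
    for op in np.random.choice(len(ops), size=k, replace=False):
        x = ops[op](x)
    return x
```

In Table 6 the random-combination column gives the highest accuracy on all three base datasets, which is why it is used as the view-generation strategy in the comparisons above.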

    Table 7  Average maneuver recognition accuracy of the Voting scheme versus baseline models

    Dataset  LSTM   GRU    T-Rep  XGBoost  Rocket  MLP    TimesNet  Voting
    D1       0.900  0.924  0.915  0.898    0.876   0.912  0.899     0.925
    D2       0.813  0.800  0.815  0.699    0.781   0.789  0.820     0.821
    D3       0.859  0.851  0.888  0.758    0.802   0.851  0.885     0.895
    D4       0.782  0.775  0.751  0.764    0.736   0.774  0.802     0.828
    D5       0.752  0.776  0.731  0.860    0.742   0.781  0.794     0.829
    D6       0.764  0.763  0.761  0.808    0.682   0.775  0.808     0.863
    D7       0.782  0.794  0.738  0.803    0.730   0.761  0.766     0.834
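Table 7 reports a Voting scheme against single-model baselines. A hedged sketch of plain majority voting over per-window class predictions is given below; how the paper actually forms and combines its ensemble is not detailed on this page, so the member models and tie-breaking rule here are assumptions.

```python
# Minimal majority-voting sketch in the spirit of the "Voting" column of Table 7
# (the ensemble members and tie-breaking rule are assumptions for illustration).
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of predicted class labels."""
    votes = []
    for sample_preds in np.asarray(predictions).T:
        counts = Counter(sample_preds.tolist())
        votes.append(counts.most_common(1)[0][0])  # ties resolved by first-seen label
    return np.array(votes)

# Example: three hypothetical classifiers voting on four maneuver windows.
preds = np.array([[0, 1, 2, 2],
                  [0, 1, 1, 2],
                  [1, 1, 2, 0]])
print(majority_vote(preds))  # -> [0 1 2 2]
```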
Publication history
  • Received:  2025-06-03
  • Revised:  2025-08-29
  • Published online:  2025-09-08
  • Issue date:  2025-11-10
