Volume 47, Issue 3, March 2025
Citation: ZHAO Zijian, XU Shuwen, SHUI Penglang. A Network Model for Sea Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion[J]. Journal of Electronics & Information Technology, 2025, 47(3): 696-706. doi: 10.11999/JEIT240818

A Network Model for Sea Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion

doi: 10.11999/JEIT240818 cstr: 32379.14.JEIT240818
Funds:  The National Natural Science Foundation of China (62371382)
  • Received Date: 2024-09-24
  • Rev Recd Date: 2025-02-21
  • Available Online: 2025-03-06
  • Publish Date: 2025-03-01
Objective   Recognition of small targets on the sea surface is a critical and challenging task in maritime radar surveillance: the targets are varied and the sea-surface environment is complex. Because these targets are small, typically occupying only one or a few range cells even in high-resolution radar systems, they offer insufficient spatial scattering-structure information for classification. The primary discriminative information instead comes from fluctuations of the target's Radar Cross Section (RCS) and changes in its radial velocity. This study proposes a classification network model based on multidomain radar echo data fusion, providing a theoretical foundation for small-target recognition in complex sea-surface environments.

Methods   A small marine target classification network is proposed that fuses radar echo data from the time domain and the time-frequency domain. Because data from the two domains carry distinct physical meaning, two dedicated modules are designed: a Time-domain LeNet (T-LeNet) neural network module that extracts features from the amplitude sequence, and a time-frequency feature extraction module that operates on the Time-Frequency Distribution (TFD). The amplitude sequence mainly reflects the fluctuation of the target's RCS, whereas the TFD captures both the RCS fluctuation and variations in the target's radial velocity. Extracting deep information from both domains yields effective discriminative features and improves classification. The benefit of multidomain fusion is validated through ablation experiments in which the amplitude sequence is added to a TFD-only input, or the TFD is added to an amplitude-only input (a sketch of such a fusion network follows below). The effect of network depth on recognition performance is also examined by using ResNet architectures of varying depth for time-frequency feature extraction.
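To make the two-branch architecture concrete, the following is a minimal PyTorch sketch of such a fusion classifier. The T-LeNet layer widths, the 256- and 128-dimensional feature sizes, and the concatenation-based fusion rule are illustrative assumptions; the abstract does not specify the exact topology.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class TLeNet(nn.Module):
    """1-D LeNet-style branch for the amplitude (slow-time) sequence (assumed topology)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(4),       # fixed-length output regardless of sequence length
        )
        self.fc = nn.Linear(16 * 4, out_dim)

    def forward(self, x):                  # x: (batch, 1, N) amplitude sequence
        return self.fc(self.features(x).flatten(1))

class FusionNet(nn.Module):
    """Concatenates time-domain and time-frequency features before the classifier."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.tfd_branch = resnet18(num_classes=256)   # TFD image  -> 256-d feature
        self.amp_branch = TLeNet(out_dim=128)         # amplitude  -> 128-d feature
        self.classifier = nn.Linear(256 + 128, num_classes)

    def forward(self, tfd, amp):           # tfd: (B, 3, H, W), amp: (B, 1, N)
        fused = torch.cat([self.tfd_branch(tfd), self.amp_branch(amp)], dim=1)
        return self.classifier(fused)

# Example: four target classes, 224x224 TFD images, 1024-sample amplitude sequences
model = FusionNet(num_classes=4)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 1024))   # shape (2, 4)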
Results and Discussions   A dataset containing four types of small sea-surface targets is constructed from measured data, and six evaluation metrics are used to assess the model's classification ability. With the TFD as the only input, ResNet18 achieves the best recognition performance: its residual connections prevent vanishing and exploding gradients, permitting a deeper network that extracts discriminative features between targets more effectively. With the amplitude sequence as the only input, T-LeNet performs markedly better than the TFD-only configurations. Adding the amplitude sequence, processed by T-LeNet, to the TFD input raises recognition performance noticeably. Incorporating time-domain information (the amplitude sequence) alongside the TFD, and extracting abstract features from the one-dimensional data with T-LeNet, thus captures deeper target characteristics across multiple domains and dimensions and significantly enhances the network's recognition capability. The best performance is obtained when both the amplitude sequence and the TFD are input with the ResNet18 backbone, reaching an accuracy of 97.21%, a 21.1% improvement over the TFD-only VGG16 configuration (Table 3).

The confusion matrices show that Class I and Class II targets are classified more accurately with the amplitude sequence alone, with average accuracy gains of 5.5% and 85.1%, respectively, over the TFD-only input. Class IV targets are classified better with the TFD alone, with an average accuracy gain of 5.5% over the amplitude-sequence input; Class III accuracy shows no significant difference between the two (Fig. 5). Comparing ResNet networks of different depths shows that increasing depth does not significantly improve recognition performance (Table 4). Analysis of loss and accuracy on the training and validation sets across the experiments confirms that adding the T-LeNet branch helps further: the validation accuracy of AlexNet, VGG16, and ResNet18 improves by approximately 7.7%, 5.3%, and 3.6%, respectively, while loss decreases on both sets (Fig. 6).

Conclusions   This paper proposes a small sea-surface target classification method based on Convolutional Neural Networks (CNNs) and data fusion. Exploiting the distinct physical meaning of the time domain and the time-frequency domain, it constructs a T-LeNet module and a time-frequency feature extraction module to extract deep information from small sea-surface targets across multiple domains and dimensions, and fuses the abstract features jointly extracted from the two domains for classification. Experimental results demonstrate that the proposed method has strong recognition capability for small sea-surface targets.
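For completeness, here is a minimal sketch of how the two network inputs might be derived from a complex slow-time echo sequence. A short-time Fourier spectrogram stands in for the TFD, and the 1 kHz sampling rate and window sizes are placeholders; the paper's actual time-frequency distribution and parameters may differ.

import numpy as np
from scipy.signal import stft

def tfd_image(echo, fs=1000.0, nperseg=64, noverlap=56):
    """echo: complex-valued slow-time sequence from one range cell (placeholder parameters)."""
    f, t, Z = stft(echo, fs=fs, nperseg=nperseg, noverlap=noverlap,
                   return_onesided=False)            # keep negative Doppler bins
    power = np.fft.fftshift(np.abs(Z) ** 2, axes=0)  # center zero Doppler
    return 10.0 * np.log10(power + 1e-12)            # dB-scaled image for the CNN branch

# Stand-in echo; real data would come from the radar's range-cell time series
echo = np.random.randn(1024) + 1j * np.random.randn(1024)
tfd = tfd_image(echo)    # 2-D array: Doppler bins x time frames, for the TFD branch
amp = np.abs(echo)       # 1-D amplitude sequence, for the T-LeNet branch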