Volume 47 Issue 4
Apr. 2025
Citation: HUA Chengcheng, ZHOU Zhanfeng, TAO Jianlong, YANG Wenqing, LIU Jia, FU Rongrong. Virtual Reality Motion Sickness Recognition Model Driven by Lead-attention and Brain Connection[J]. Journal of Electronics & Information Technology, 2025, 47(4): 1161-1171. doi: 10.11999/JEIT240440

Virtual Reality Motion Sickness Recognition Model Driven by Lead-attention and Brain Connection

doi: 10.11999/JEIT240440 cstr: 32379.14.JEIT240440
Funds:  The National Natural Science Foundation of China (62206130, 62073282), The Natural Science Foundation of Jiangsu Province (BK20200821), The Startup Foundation for Introducing Talent of NUIST (2020r075), The Natural Science Foundation of Hebei Province (F2022203092)
  • Received Date: 2024-06-03
  • Rev Recd Date: 2025-03-04
  • Available Online: 2025-03-15
  • Publish Date: 2025-04-01
Objective  Virtual Reality Motion Sickness (VRMS) hinders the development of virtual reality technology and degrades the user experience, potentially threatening users’ health. Accurately assessing VRMS levels is essential for studying its causes and treatment strategies. The ElectroEncephaloGram (EEG) provides a non-invasive, low-cost measurement with high temporal resolution that reflects neural activity in real time, making it well suited to VRMS assessment. This paper introduces and improves an end-to-end EEG regression model based on Convolutional Neural Networks (CNN) and functional brain networks, termed the Brain Connection-based CNN (BCCNN), to quantitatively recognize VRMS in users within a VR environment.

Methods  The BCCNN uses a 1D-CNN to filter the EEG signals and computes correlation coefficients among electrodes to form functional brain networks. It then employs CNN and fully connected layers to extract network features and perform regression (Figure 2); a minimal code sketch of this pipeline is given after the abstract. This study optimizes the kernel size of the 1D-CNN and proposes a novel lead-attention module to enhance feature extraction. The attention module, inspired by the squeeze-and-excitation mechanism, computes its weights from the filtered EEG signals rather than from the extracted features; the weights are both derived from and applied to the EEG leads (Figure 3). To induce VRMS, a virtual reality scene called the “VRQ test” is used. Each subject’s EEG signals and subjective VRMS level, recorded via the Simulator Sickness Questionnaire (SSQ), are collected and used to validate the model (Figure 2). Model performance is evaluated with the Mean Squared Error (MSE), Mean Absolute Error (MAE), and goodness of fit (R²) between the predicted and true VRMS levels; the three metrics are defined after the abstract. A comparison with reference methods is conducted to assess the effectiveness of the BCCNN and its optimizations.

Results and Discussions  The optimized kernel size of the 1D-CNN layer is 16, which reduces the average MSE of the original BCCNN by 6.53 (Table 1, Table 2, Figure 4). The lead-attention module further improves the BCCNN, lowering the average MSE by 7.65 relative to the original model and outperforming the channel-attention module (Table 2). The optimized BCCNN achieves an average MSE of 15.10 and an average R² of 96.63% in 10-fold cross-validation, significantly exceeding the original BCCNN and ten state-of-the-art and baseline models (Table 2). Among the reference methods, the combination of differential entropy and Gaussian process regression yields the best performance. Reference models that use a filter bank outperform the other reference models, indicating that handcrafted processing of the EEG data can enhance model performance (Table 2). Visualizing the functional connections and extracted features of the BCCNN shows that functional connections are stronger at higher VRMS levels than at lower ones.

Conclusions  This study introduces and optimizes the BCCNN for assessing VRMS from EEG. The main innovations are the optimized kernel size of the 1D-CNN and the novel lead-attention module. The results demonstrate that these optimizations improve the accuracy of VRMS assessment, with the updated model offering a more precise evaluation. EEG is therefore expected to become a standard method for assessing VRMS in VR products, and the proposed approach enables VRMS assessment both during and after a user’s experience of a VR scene.
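To make the pipeline described in the Methods concrete, below is a minimal PyTorch sketch of a BCCNN-style model with a lead-attention module. Everything beyond the overall structure is an assumption for illustration: the lead count, layer sizes, the reduction ratio of the attention bottleneck, and the pooling are placeholders, not the paper’s configuration. The structure itself follows the description above: a learnable 1D filter per lead, squeeze-and-excitation-style weights computed from the filtered signals and applied back to the leads, a Pearson-correlation functional brain network, and a CNN regression head.

```python
# Minimal sketch of a BCCNN-style pipeline (assumed architecture;
# all sizes are illustrative, not the paper's exact configuration).
import torch
import torch.nn as nn

class LeadAttention(nn.Module):
    """Squeeze-and-excitation-style attention over EEG leads.

    Weights are derived from the filtered signals (not the extracted
    features) and re-applied to the leads, as the Methods describe.
    """
    def __init__(self, n_leads: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_leads, n_leads // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_leads // reduction, n_leads),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, leads, time) -- squeeze the time axis per lead
        s = x.abs().mean(dim=-1)            # (batch, leads)
        w = self.fc(s)                      # one weight per lead
        return x * w.unsqueeze(-1)          # re-weight each lead

class BCCNN(nn.Module):
    def __init__(self, n_leads: int = 32, kernel_size: int = 16):
        super().__init__()
        # Depthwise 1D-CNN acts as a learnable filter on each lead
        self.filter = nn.Conv1d(n_leads, n_leads, kernel_size,
                                padding=kernel_size // 2, groups=n_leads)
        self.attn = LeadAttention(n_leads)
        # 2D-CNN + fully connected layers extract features from the
        # (leads x leads) connectivity matrix and regress the VRMS level
        self.feature = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(8 * 8 * 8, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    @staticmethod
    def correlation_matrix(x: torch.Tensor) -> torch.Tensor:
        # Pearson correlation between every pair of leads
        x = x - x.mean(dim=-1, keepdim=True)
        x = x / (x.norm(dim=-1, keepdim=True) + 1e-8)
        return x @ x.transpose(1, 2)        # (batch, leads, leads)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        filtered = self.attn(self.filter(eeg))
        fbn = self.correlation_matrix(filtered).unsqueeze(1)
        return self.head(self.feature(fbn)).squeeze(-1)

# Example: a batch of 4 trials, 32 leads, 512 samples each
scores = BCCNN()(torch.randn(4, 32, 512))
print(scores.shape)  # torch.Size([4])
```

Note one consequence of this arrangement: because the attention re-weights the leads before the correlation matrix is formed, strongly weighted leads dominate the resulting functional brain network that the regression head sees.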
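The three evaluation metrics named in the Methods are the standard definitions; writing $y_i$ for the true (SSQ-derived) VRMS level, $\hat{y}_i$ for the model prediction, and $\bar{y}$ for the mean of the $n$ true values:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}.$$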
