Shan, Yuanhe and Wang, Huifeng and Du, Hao and Peng, Haonan and Guan, Yueyuan and Huang, He and Zhang, Xiaowei and Jiao, Yunmei and Zhang, Chengyan and Pan, Zefeng and Zhao, Jiarui and He, Jianhua (2025) Vehicle Visual Perception Under Low Visibility Road Environments Based on AoP&DoP Multi-Polarization Parameter Characterization. IEEE Transactions on Intelligent Transportation Systems. pp. 1-16. DOI https://doi.org/10.1109/tits.2025.3630655
Abstract
Vehicle visual perception is essential for safe autonomous driving, especially under challenging low-visibility conditions. Polarimetric imaging has been shown to enhance perception by improving target-background contrast and reducing glare. However, current research on polarimetric imaging for autonomous driving largely relies on a single polarimetric feature, and exploiting richer polarimetric information to enhance vehicle visual perception in low-visibility scenarios remains a critical challenge. This paper addresses that challenge by integrating multi-polarization parameter characterization with a deep learning model for semantic segmentation, called TransWNet. The model combines Convolutional Neural Network (CNN) and Transformer architectures to comprehensively extract features and contextual information from both the Degree of Polarization (DoP) and the Angle of Polarization (AoP) during the encoding phase. In the decoding phase, it incorporates skip connections to retain multimodal polarimetric information across deep and shallow features, and generates the output through feature fusion. To the best of our knowledge, this is the first work that jointly exploits DoP and AoP for vehicle scene understanding under degraded visibility. Experimental results demonstrate that, by effectively leveraging multimodal polarimetric information, TransWNet achieves significantly better semantic segmentation of low-visibility traffic scenes, with marked improvements in mIoU, mPA, and Accuracy over all single-feature baselines. Compared with the baseline method, TransWNet improves Accuracy by 3.44%.
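For context, DoP and AoP are conventionally derived per pixel from the linear Stokes parameters of a four-angle polarization capture (0°, 45°, 90°, 135°), as produced by a division-of-focal-plane polarization camera. The sketch below illustrates this standard derivation with NumPy; the function names and preprocessing are illustrative assumptions, not the authors' implementation, whose exact pipeline is described in the paper.

```python
import numpy as np

def stokes_from_four_angles(i0, i45, i90, i135):
    """Estimate linear Stokes parameters from four polarizer-angle
    intensity images, using I(theta) = 0.5*(S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (two orthogonal pairs each sum to S0)
    s1 = i0 - i90                       # horizontal vs. vertical component
    s2 = i45 - i135                     # diagonal component
    return s0, s1, s2

def dop_aop(i0, i45, i90, i135, eps=1e-8):
    """Per-pixel Degree of (linear) Polarization and Angle of Polarization."""
    s0, s1, s2 = stokes_from_four_angles(i0, i45, i90, i135)
    dop = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # DoP in [0, 1]
    aop = 0.5 * np.arctan2(s2, s1)             # AoP in [-pi/2, pi/2] radians
    return dop, aop

# Example on synthetic data (assumed float images in [0, 1]):
rng = np.random.default_rng(0)
frames = [rng.random((480, 640)) for _ in range(4)]
dop, aop = dop_aop(*frames)
print(dop.shape, float(aop.min()), float(aop.max()))
```

The resulting DoP and AoP maps are the two polarimetric modalities that a model like TransWNet would consume in place of, or alongside, conventional intensity images.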
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Low visibility, semantic segmentation, vehicle visual perception, Transformer, angle of polarization, degree of polarization |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Date Deposited: | 22 Dec 2025 15:50 |
| Last Modified: | 22 Dec 2025 15:50 |
| URI: | http://repository.essex.ac.uk/id/eprint/42436 |
Available files
Filename: Vehicle_Visual_Perception_Under_Low_Visibility_Road_Environments_Based_on_AoPampDoP_Multi-Polarization_Parameter_Characterization.pdf